Thursday, June 29, 2017

Security Data Scientists Without Borders – Thoughts from our first Colloquium

The move to the cloud is changing the security landscape. As a result, there is a surging interest in applying data-driven methods to security. In fact, there is a growing community of talented people focused on security data science. We’ve been shedding our respective “badges” and meeting informally for years, but recently decided to see how much progress we might make against some of our bigger challenges with a more structured and formal exchange of ideas in Redmond. The results far exceeded our expectations. Here’s a bit of what we learned.

The first thing to understand is that academia and industry both focus largely on security detection, but the emphasis is almost always on the algorithmic machinery powering the systems. We at Microsoft are transparent with our algorithm research and in fact are the only cloud provider to openly share the machine learning algorithms securing our cloud service. In order to build on that research and learn more about best practices for putting security data science solutions in production, we reached out to our peers in the industry.

We started by meeting with some friends at Google to swap ideas for keeping our cloud services and mutual customers secure. That one-time exercise proved so valuable that it soon turned into a recurring meeting, wherein we learned that despite different approaches to data modeling, we face similar challenges. Last week, we opened the doors at Microsoft to the broader community. At first, we weren’t sure companies would take us up on the offer to discuss security data science issues in the open – those doubts could not have been more misplaced. We quickly had delegates from Facebook, Salesforce, Crowdstrike, Google, LinkedIn, Endgame, Sqrrl, the Federal Reserve and researchers from the University of Washington. What was supposed to be an hour-long meetup morphed into a full-blown conference – so much so that we had to give it a name: the “Security Data Science Colloquium”.

The goal of the colloquium was simple: share learnings of how different cloud providers/services secure their systems using machine learning. No NDAs, no complicated back and forth paperwork. Our only constraints: keep it technical and be honest. This way, we could ensure that the 300+ applied Machine Learning (ML) engineers, security analysts, and incident responders who signed up had a collaborative environment to discuss freely!

Security Data Science > Security + Data Science

Operationalizing security and machine learning solutions is tricky, not only because security data science solutions inherit complexity from both fields, but also because their intersection poses new challenges. For instance, compliance restrictions that dictate data cannot be exported from specific geographic locations have a downstream effect on model design, deployment, evaluation, and management strategies (a data science constraint). As Adam Fuchs, CTO of Sqrrl, pointed out in his lecture, this complicated machinery requires a variety of actors to land an operational solution: threat hunters, data scientists, computer scientists and security analysts, in addition to the standard development crew of program managers, developers and service engineers.

Security Data Scientists ❤ Rules

To quote Sven Krasser (@SvenKrasser), Chief Scientist at Crowdstrike, “Rules are awesome”. This may come as a surprise to machine learning purists who have long dismissed rules as futile tools. But as Sven noted in his talk, rules are very good at finding known maliciousness, and we as a community must not shy away from them. During our smaller breakout discussions, we explored various ways to combine rules and machine learning. For instance, at Microsoft, we have had success in using Markov Logic Networks to capture the domain knowledge of our security analysts and model it as probabilistic graphs.
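As a toy illustration of one way to combine the two (this is not Microsoft’s Markov Logic Network approach, which is considerably richer), rule verdicts can be treated as binary features feeding a simple probabilistic model. The rules, weights, and event fields below are invented for the sketch:

```python
import math

# Hypothetical rules encoding analyst knowledge (names are illustrative).
RULES = {
    "office_spawns_shell": lambda e: e["parent"] == "winword.exe"
                                     and e["child"] in {"cmd.exe", "powershell.exe"},
    "rare_outbound_port": lambda e: e["dst_port"] not in {80, 443, 53},
}

# Hand-set weights stand in for parameters a model would learn.
WEIGHTS = {"office_spawns_shell": 2.5, "rare_outbound_port": 1.2}
BIAS = -3.0

def malicious_probability(event):
    # Each rule verdict becomes a binary feature; a logistic model combines
    # them into a soft score instead of a hard yes/no.
    score = BIAS + sum(WEIGHTS[name] for name, rule in RULES.items() if rule(event))
    return 1.0 / (1.0 + math.exp(-score))
```

A Markov Logic Network generalizes this by attaching weights to first-order logic formulas rather than flat rules, but the principle is the same: soft-weight analyst knowledge rather than treating rules as binary verdicts.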

Adversarial Machine Learning is Mainstream and We Don’t Know How to Solve It

Hyrum Anderson (@drhyrum) and Robert Filar’s (@filar) riveting talk on how adversaries can subvert machine learning solutions made defenders in the room uncomfortable (in a good way!). They showed different ways that attackers can successfully manipulate machine learning models, with anywhere from partial to no access to the system. Instances of such attacks have been known since spammers first tried to evade detection and adversaries began dodging antivirus systems, but the biggest takeaway here is that current machine learning systems, like any system, are susceptible to attack. For instance, attackers can observe a system’s alert outputs or its decision label (such as malware or not) and work around these defenses. While this has been happening for some time, the game changer is that this feedback is instantaneous: the signal that was designed to help defenders act swiftly is now exploited by attackers. Research in this area is nascent, and we still don’t know how to bridge this gap.
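A minimal sketch of the label-feedback problem: the attacker below sees only the classifier’s output label, yet can walk a sample across the decision boundary one query at a time. The feature name and threshold are made up for illustration:

```python
def classifier(features):
    # Toy detector: flags a sample whose "packed_score" exceeds a threshold.
    # The feature and threshold are invented for this illustration.
    return "malware" if features["packed_score"] > 0.5 else "benign"

def evade(features, step=0.05, max_queries=100):
    # Black-box evasion: the attacker observes only the output label and
    # nudges one feature until the verdict flips, counting queries spent.
    sample, queries = dict(features), 0
    while classifier(sample) == "malware" and queries < max_queries:
        sample["packed_score"] -= step
        queries += 1
    return sample, queries
```

Real model-stealing and evasion attacks are far more sophisticated, but the economics are the same: every instantaneous verdict handed back to the client is also a gradient signal for the adversary.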

Call for standardization and benchmarks

At our breakout sessions, we heard the need for a standardized benchmark dataset à la ImageNet – for instance, how do we know how the newest detection for anomalous process creation performs across various test cases? An interesting observation made by the “Security Platform” discussion group was the need for something along the lines of “GitHub for feature engineering”. They reckoned that many teams waste time managing feature pipelines and sometimes re-computing the same feature, and wanted an effective management system that would make teams more efficient and code more maintainable.
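A “GitHub for feature engineering” is a much bigger system than this, but the de-duplication the group asked for can be sketched as a tiny feature registry that caches computed features per entity (all names here are hypothetical):

```python
# Toy feature registry: compute each (feature, entity) pair once and cache
# it — the core de-duplication a shared "feature store" would provide.
class FeatureRegistry:
    def __init__(self):
        self.features = {}   # name -> compute function
        self.cache = {}      # (name, entity) -> value
        self.computes = 0    # how many real computations happened

    def register(self, name, fn):
        self.features[name] = fn

    def get(self, name, entity):
        key = (name, entity)
        if key not in self.cache:
            self.computes += 1
            self.cache[key] = self.features[name](entity)
        return self.cache[key]
```

A production feature store would add versioning, sharing across teams and pipelines, and persistence – but the caching above is the heart of the “stop re-computing the same feature” complaint.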

The colloquium, thanks to the enthusiastic participation of our peers, ended up as a marketplace of security data science ideas – we discussed, agreed, and challenged one another with the intention of learning. My favorite quote about the conference, comes from a Salesforce participant, who remarked “we are all batting for the same team”. It particularly resonated with me, because despite our organizational boundaries, we all have a common goal: protect our customers from adversaries.

This is our commitment to share what we have learned – success and failures, so that you don’t have to waste time going down the wrong path. Given the overwhelming support from the security analytics community, my colleagues have already started planning on the next edition of the colloquium. If you are interested in participating, have ideas to make it better, or want to lend a helping hand in organizing, drop a note at ramk@microsoft.com or reach out to me on Twitter – @ram_ssk.



from Microsoft Secure Blog Staff

What are Confidence building measures (CBMs) and how can they improve cybersecurity?

Cyberspace security is too often viewed through a prism of technological terms and concepts. In my experience, even supposedly non-technical discussions of cyberspace quickly devolve into heated debates about “vulnerability coordination”, “the latest malware”, “the best analytical tools”, “threat information sharing”, and so on. While these are interesting and important topics, it is ultimately people and their personal perspectives – not technology – that largely shape governments’ political, diplomatic and military choices in cyberspace.

At the heart of government’s “human” decision-making in cyberspace are understanding and trust. The two are not the same. It’s possible for one state to understand another’s capabilities in cyberspace but not to trust their intentions. The reverse is also true, with trust existing outside of understanding another’s capabilities. But, by and large, some level of understanding about what another state can and can’t do in cyberspace should at least reduce distrust. And that can help governments make rational judgments about each other’s behaviors as well as de-escalate tensions between and among states.

One significant complication in building understanding and defusing distrust is the fact that many systems useful in cyber-defense can also be used in cyber-offense. When a state invests in cyber capabilities to defend itself, its rivals might instead see a growth in offensive capabilities. This is not a question of technical understanding but rather of reading the intent of others. A very human response to someone seemingly gearing up for conflict is to build at the very least one’s own defenses (and to, potentially, even increase one’s offensive as well as retaliatory capabilities). Such a move is, however, equally liable to misinterpretation by others. Thus, escalation spreads, trust evaporates, and distrust balloons, leaving cyberspace, on which so much of modern life depends, akin to a powder keg, ready to explode. The potential for a cyber arms race is as real as it is dangerous.

An essential response to this critical challenge is the use of confidence building measures (CBMs) between states. Today, CBMs are still generally seen as vectors for instilling good cybersecurity practices, especially during a country’s early entry into cyberspace. Certainly, CBMs can help such countries counter the threat of cybercrime, and can also help promote international consistency in cybersecurity approaches, which is an essential part of combating cybercrime. However, CBMs are much more than this.

Coming of age under the threat of Cold War nuclear annihilation, CBMs enable states to minimize exactly the kind of misunderstandings that fuel distrust and exacerbate tensions. In many ways, they are akin to pressure valves for states to use before a situation escalates into conflict. CBMs can help states step back from thinking, “We need to get our cyber-retaliation in first”. They may not lead directly to trust but what they provide is manifestly better than its absence. They have a clear role to play in ensuring the safety and stability of cyberspace by reducing the risk of cyberwar breaking out. As such, they can be a necessary prerequisite to building trust.

CBMs are already being built into critical state-to-state cyberspace agreements. The UNGGE 2015 (voluntary) norms placed CBMs at the core of responsible state behavior in cyberspace. In the UNGGE’s words, they “allow the international community to assess the activities and intention of States”. That assessment of actions and intent is absolutely essential to addressing the human perspective. The UNGGE leveraged previous work done in the framework of the Organization for Security and Co-operation in Europe (OSCE), namely its 2013 CBMs. In this respect, it is significant that just last year the OSCE expanded on its CBM work precisely because, “events in cyberspace often leave room for ambiguity, speculation and misunderstanding. The worry is that miscalculations and misperceptions between states arising from activities in cyberspace could escalate, leading to serious consequences for citizens as well as for the economy and administration, and potentially fueling political tensions.”

A failure to mature and refine CBMs globally adds to distrust and militarization in cyberspace, i.e. the aforementioned cyber arms race. The consequences of the “miscalculations and misperceptions” that the OSCE warned of can easily move from the virtual world to the real one. For example, 2010’s so-called “Pakistan-India cyberwar” saw “cyber armies” from each country vandalizing official websites, exacerbating serious diplomatic and military tensions after the 2008 Mumbai terror attacks. Furthermore, recent tensions between parts of the West and Russia, North Korea or even China all feature strong elements of “cyber-distrust”. The danger, of course, is that once there is “cyber-distrust” among states it is likely to spread into other spheres, if left unchecked, and vice versa.

So, if the human perspective matters at least as much as the technology when it comes to government decision-making about cyberspace, all parties should take every opportunity to promote understanding and reduce distrust between states. We should use whatever tools seem most appropriate to do so. CBMs are essential in this regard. They are and remain a key tool in the cyber peacebuilder’s toolkit.

 



from Paul Nicholas

Tuesday, June 27, 2017

Tips for protecting your information and privacy against cybersecurity threats

This post is authored by Steven Meyers, security operations principal, Microsoft Cyber Defense Operations Center.

Introducing a new video on best practices from the Microsoft Cyber Defense Operations Center

In 2016, more than 4.2 billion records were stolen by hackers. The number of cyberattacks and breaches in 2017 has risen 30 percent.

The business sector leads in the number of records compromised so far, with more than 7.5 million exposed records in 420 reported incidents. These cybercrimes are often intended for financial gain, such as opening a fraudulent credit card or accessing a company’s financial records. Today, a growing market exists in the dark web for selling credentials and sensitive information to other cybercriminals.

To help protect your information and privacy against cyberthreats, the Microsoft Cyber Defense Operations Center has published a series of best practices videos that will help consumers, businesses and organizations enable a safer online environment. This video shares some of the policies and practices that can be used to better protect information and privacy inside and outside of your operational perimeters.

Protection starts with classifying information and then putting appropriate protections in place based on its value. Some information is meant to be public, some data is sensitive but not highly valuable to outside entities, and some data is mission critical and/or could cause tremendous financial hardship if shared externally.

Cybersecurity technologies and policies such as multifactor authentication, the principle of least privilege access, just-in-time and just-enough administrator access, and Microsoft’s cybersecurity products and services can help safeguard access to data and applications.

Some cybersecurity tips discussed include:

  • Classifying emails and data according to their level of sensitivity
  • Employing multifactor authentication for access to sensitive information
  • Only providing administrator access to individuals for the time needed to complete a task
  • Restricting access to only the information needed for the task
  • Keeping your software up-to-date
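One of the tips above – granting administrator access only for the time needed – can be sketched as a time-boxed grant table. This is a deliberately minimal model of the just-in-time idea, not a real privileged-access product:

```python
import time

# Minimal just-in-time access sketch: grants carry an expiry time and
# lapse automatically, so admin rights exist only while a task needs them.
GRANTS = {}

def grant_admin(user, duration_s):
    # Record when this user's elevated access expires.
    GRANTS[user] = time.monotonic() + duration_s

def is_admin(user):
    # A user is an admin only while an unexpired grant exists.
    return GRANTS.get(user, 0.0) > time.monotonic()
```

Real implementations add approval workflows, auditing, and scoped permissions (the "just-enough" half), but automatic expiry is the essential property.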

Please take a few minutes to watch the video and share it with your colleagues, friends and family. We all need to be diligent in the face of this growing and ever-more sophisticated threat.

Also, be sure to watch part one of the video series, Protecting your identity from cybersecurity threats. Check back next week for our third video, Protecting your devices from cybersecurity threats.

Additional resources:



from Microsoft Secure Blog Staff

Sunday, June 25, 2017

BSides Athens 2017 Wrap-Up

The second edition of BSides Athens was held this Saturday. I attended the first edition (my wrap-up is here) and I was happy to be accepted as a speaker for the second time! This edition moved to a new location, which was great: good wireless, air conditioning and food. The day was based on three tracks: the first two for regular talks and the third one for the CTF and workshops. The “boss”, Grigorios Fragkos, introduced the 2nd edition. This one gave more attention to a charity program called “The Smile of the Child”, which helps Greek kids remain in touch with new technologies. A specific project, called “ODYSSEAS”, is based on a truck that travels across Greece to educate kids about technologies like mobile phones, social networks, etc. BSides Athens donated to this project. A very nice initiative, presented by Stefanos Alevizos, who received a slot of a few minutes to describe the program (content in Greek only).


The keynote was delivered by Dave Lewis, who presented “The Unbearable Lightness of Failure”. The main point made by Dave is that we fail but… we learn from our mistakes! In other words, “failure is an acceptable teaching tool“. The keynote drew on many examples, like signs: we see signs everywhere and we must understand how to interpret them. Or the famous Friedrich Nietzsche quote: “That which does not kill us makes us stronger“. We are facing failures all the time. The latest good example is the WannaCry story, which should never have happened but… you know the story! Another important message is that we don’t have to be afraid to fail. We also have to share as much as possible, not only good stories but also bad ones. Sharing is key! Participate in blogs, social networks, podcasts. Break out of your silo! Dave is a renowned speaker and delivered a really good keynote!

Then the talks were split across the two main rooms. For the first one, I decided to attend Thanassis Diogos’s presentation about “Operation Grand Mars“. In January 2017, Trustwave published an article which described this attack. Thanassis came back on this story with more details. After a quick recap about what incident management is, he reviewed all the facts related to the operation and gave some tips to detect abnormal activities on your network. It started with an alert generated by a workstation and, three days later, the same message came from a domain controller. Definitely not good! The entry point was infected via a malicious Word document / JavaScript. Then a payload was downloaded from Google Docs, which is, for most organizations, a trustworthy service. He then explained how persistence was achieved (via autorun, scheduled tasks) and also the lateral movements. The pass-the-hash attack was used. Another tip from Thanassis: if you see local admin accounts used for network logon, this is definitely suspicious! Good review of the attack with some good tips for blue teams.
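That last tip can be turned into a trivial hunting query. Assuming logon events have already been parsed into dictionaries (e.g. from Windows Security log Event ID 4624, where logon type 3 means a network logon), a sketch might look like:

```python
def suspicious_logons(events, local_accounts):
    # Flag network logons (type 3) performed with *local* accounts —
    # per the talk's tip, local admins should not log on over the network.
    return [e for e in events
            if e["logon_type"] == 3 and e["account"] in local_accounts]
```

The event schema here is invented for the sketch; in practice you would feed it records extracted from your SIEM or the Windows event log.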

My next choice was to move to the second track to follow Konstantinos Kosmidis‘s talk about machine learning (a hot topic today in many conferences!). I’m not a big fan of these technologies but I was interested in the abstract. The talk was a classic one: after an introduction to machine learning (which we already use every day in technologies like Google face recognition, self-driving cars or voice recognition), why not apply this technique to malware detection? The goal is to detect, classify but, more importantly, to improve the algorithm! After reviewing some pros & cons, Konstantinos explained the technique he used in his research to convert malware samples into images. More interestingly, he explained a technique based on steganography to attack this algorithm. The speaker was a little bit stressed but the idea looks interesting. If you’re interested, have a look at his GitHub repository.
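The sample-to-image conversion is usually done by reading the binary as grayscale pixels, one row per fixed number of bytes. A minimal sketch (the row width is chosen arbitrarily; real work typically scales it with file size):

```python
def bytes_to_image(blob, width=16):
    # Interpret raw bytes as grayscale pixels (0-255), one row per `width`
    # bytes, zero-padding the final row so the "image" is rectangular.
    rows = []
    for i in range(0, len(blob), width):
        row = list(blob[i:i + width])
        row += [0] * (width - len(row))
        rows.append(row)
    return rows
```

Once samples are images, off-the-shelf image classifiers (and, as the talk showed, image-domain attacks like steganographic perturbation) apply directly.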

Back to the first track to follow Professor Andrew Blyth with “The Role of Professionalism and Standards in Penetration Testing“. The penetration testing landscape has changed considerably over the last years. We moved from script kiddies searching for juicy vulnerabilities to professional services. The problem is that today some pentest projects are requested not to detect security issues and improve, but just for… compliance requirements. You know the “check-the-box” syndrome. Also, the business evolves and is requesting more assurance. The coming European GDPR regulation will increase the demand for penetration tests. But a real pentest is not a Nessus scan with a new logo, as Andrew explained! We need professionalism. In the second part of the talk, Andrew reviewed some standards that involve pentests: iCAST, CBEST, PCI, OWASP, OSSTMM.

After a nice lunch with Greek food, back to talks with the one by Andreas Ntakas and Emmanouil Gavriil about “Detecting and Deceiving the Unknown with Illicium”. They are working for one of the sponsors and presented the tool developed by their company: Illicium. After the introduction, my feeling was that it’s a new honeypot with extended features. There is interesting stuff in it but, IMHO, it was a commercial presentation; I’d have expected a demo. Also, the tool looks nice but is dedicated to organizations that have already reached a mature security level. Indeed, before defeating the attacker, the first step is to properly implement basic controls like… patching! Something some organizations still don’t do today!

The next presentation was “I Thought I Saw a |-|4><0.-” by Thomas V. Fisher. Many interesting tips were provided by Thomas, like:

  • Understand and define “normal” activities on your network to better detect what is “abnormal”.
  • Log everything!
  • Know your business
  • Keep in mind that the classic cyber kill-chain is not always followed by attackers (they don’t follow rules)
  • The danger is to try to detect malicious stuff based on… assumptions!

The model presented by Thomas was based on 4 A’s: Assess, Analyze, Articulate and Adapt! A very nice talk with plenty of tips!

The next slot was assigned to Ioannis Stais, who presented his framework called LightBulb. The idea is to build a framework to help in bypassing common WAFs (web application firewalls). Ioannis first explained how common WAFs work and why they can be bypassed. Instead of testing all possible combinations (brute-force), LightBulb relies on the following process:

  • Formalize knowledge of code injection attack variations.
  • Expand the knowledge.
  • Cross-check for vulnerabilities.

Note that LightBulb is also available as a Burp Suite extension! The code is available here.
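To see why naive WAF rules invite the variations a tool like LightBulb enumerates, consider a toy blacklist rule (invented for this sketch) and how easily equivalent payloads slip past it:

```python
import re
import urllib.parse

# A toy blacklist rule. Real WAF rule sets are far larger, but share the
# weakness shown below: they match one textual form of the attack.
NAIVE_RULE = re.compile(r"union\s+select")

def naive_waf_blocks(payload):
    # True if the "WAF" would block this request payload.
    return bool(NAIVE_RULE.search(payload))
```

The rule catches the textbook payload but misses trivial variants – a case change (the rule lacks re.IGNORECASE), URL encoding, inline SQL comments – which is exactly the search space LightBulb explores systematically instead of brute-forcing.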

Then, Anna Stylianou presented “Car hacking – a real security threat or a media hype?“. The last events that I attended also had talks about cars, but they focused more on abusing the remote control to open doors. Today, the focus is on the ECUs (“Engine Control Units”) present in modern cars. A car might have >100 ECUs and >100 million lines of code, which means a great attack surface! There are many tools available to attack a car via its CAN bus; even the Metasploit framework can be used to pentest cars today! The talk was not dedicated to a specific attack or tool but was more a recap of the risks that car manufacturers are facing today. Indeed, the threats have changed:

  • theft from the car (breaking a window)
  • theft of the car
  • but today: theft of the use of the car (ransomware)

Some infosec gurus also predict that autonomous cars will be used as lethal weapons! As cars can be seen as computers on wheels, the potential attacks are the same: spoofing, tampering, repudiation, disclosure, DoS or privilege escalation issues.

The next slot was assigned to me. I presented “Unity Makes Strength” and explained how to improve interconnections between our security tools/applications. The last talk was given by Theo Papadopoulos: A “Shortcut” to Red Teaming. He explained how .LNK files can be a nice way to compromise your victim’s computer. I like the “love equation”: Word + PowerShell = Love. Step by step, Theo explained how to build a malicious document with a link file, how to avoid mistakes and how to increase the chances of getting the victim infected. I like the persistence method based on assigning a popular hot-key (like CTRL-V) to a shortcut on the desktop. Windows will trigger the malicious script attached to the shortcut and then… execute it (in this case, paste the clipboard content). Evil!

The day ended with the CTF winners announcement and a lot of information about the next edition of BSides Athens. They already have plenty of ideas! It’s now time for some off-days across Greece with the family…

[The post BSides Athens 2017 Wrap-Up has been first published on /dev/random]



from Xavier

Thursday, June 22, 2017

"Hacker Village - At the #SecAwareSummit"

Editor's Note: Taylor Lobb is a security community manager for developers within Adobe. He is one of the speakers for the upcoming Security Awareness Summit 2/3 Aug in Nashville, TN. Below he gives an overview of his upcoming talk on building a Hacker Village. I am a manager of security and privacy engineering for Adobe. One of our core goals … Continue reading Hacker Village - At the #SecAwareSummit

from lspitzner

[SANS ISC] Obfuscating without XOR

I published the following diary on isc.sans.org: “Obfuscating without XOR“.

Malicious files are generated and spread over the wild Internet daily (read: “hourly”). The goal of the attackers is to use files that are:

  • not known to signature-based solutions
  • not easy to read for the human eye

That’s why many obfuscation techniques exist to lure automated tools and security analysts… [Read more]
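As a trivial example of obfuscation that contains no XOR at all, byte-wise addition modulo 256 already defeats naive string matching (the key value below is arbitrary, chosen only for the sketch):

```python
def obfuscate(data, key=0x20):
    # Byte-wise addition instead of the classic XOR — one of many ways to
    # hide strings from simple pattern matching.
    return bytes((b + key) % 256 for b in data)

def deobfuscate(data, key=0x20):
    # The inverse: subtract the same key modulo 256.
    return bytes((b - key) % 256 for b in data)
```

A signature looking for the literal string `cmd.exe` will never see it in the obfuscated bytes, yet the malware recovers it with one pass at runtime.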

[The post [SANS ISC] Obfuscating without XOR has been first published on /dev/random]



from Xavier

Wednesday, June 21, 2017

Tips for securing your identity against cybersecurity threats

This post is authored by Simon Pope, Principal Security Group Manager, Microsoft Security Response Center.

Introducing new video on best practices from the Microsoft Cyber Defense Operations Center

Ask any CISO or cybersecurity professional about their greatest security challenge, and there’s a good chance the answer will be “the actions of our people.”

While virtually all employees, contractors, and partners have the best of intentions, the fact is that protecting their online credentials, identifying and avoiding phishing scams, and evading cybercriminals is getting more difficult each day. More of our time each day is spent online, and as more financial transactions and social activities are conducted online, adversaries are becoming ever-more sophisticated in their cyberattacks.

Microsoft faces these same threats, and we have made deep investments in training our people to be more aware and diligent in the face of such dangers. Our cybersecurity success depends on our customers’ trust in our products and services, and their confidence that they can be safe on the internet. To help keep our customers and the global online community safe, we want to share some of our Cyber Defense Operations Center’s best practices for Securing your identity against cybersecurity threats in this video.

In this video, we discuss some best practices around securing your identity, such as avoiding social engineering scams that trick people into giving up their most sensitive secrets, recognizing phishing emails that falsely represent legitimate communications, and how to spot false impersonations of your trusted colleagues or friends. We also discuss some of the types of information you don’t want to share broadly (i.e. credentials, financial information and passwords), and tips for protecting your sensitive data.

Some cybersecurity tips that we discuss include:

  • Be vigilant against phishing emails
  • Be cautious when sharing sensitive information
  • Don’t automatically trust emails from people you know – they may not really be from them
  • Keep your software up-to-date

Please take a few minutes to watch the video and share it with your colleagues, friends and family. We all need to be diligent in the face of this growing and ever-more sophisticated threat. And check back next week for our second video on Protecting your devices from cybersecurity threats, and in two weeks, we will share more on Protecting your information and data from cybersecurity threats on the Microsoft Secure blog.

Additional resources:



from Microsoft Secure Blog Staff

Tuesday, June 20, 2017

TLS 1.2 support at Microsoft

This post is authored by Andrew Marshall, Principal Security Program Manager, Trustworthy Computing Security.

In support of our commitment to use best-in-class encryption, Microsoft’s engineering teams are continually upgrading our cryptographic infrastructure. A current area of focus for us is support for TLS 1.2. This involves not only removing the technical hurdles to deprecating older security protocols, but also minimizing the customer impact of these changes. To share our recent experiences with this work, we are today announcing the publication of the “Solving the TLS 1.0 Problem” whitepaper to aid customers in removing dependencies on TLS 1.0/1.1. Microsoft is also working on new functionality to help you assess the impact to your own customers when making these changes.

What can I do today?

Microsoft recommends customers proactively address weak TLS usage by removing TLS 1.0/1.1 dependencies in their environments and disabling TLS 1.0/1.1 at the operating system level where possible. Given the length of time TLS 1.0/1.1 has been supported by the software industry, it is highly recommended that any TLS 1.0/1.1 deprecation plan include the following:

  • Application code analysis to find/fix hardcoded instances of TLS 1.0/1.1.
  • Network endpoint scanning and traffic analysis to identify operating systems using TLS 1.0/1.1 or older protocols.
  • Full regression testing through your entire application stack with TLS 1.0/1.1 and all older security protocols disabled.
  • Migration of legacy operating systems and development libraries/frameworks to versions capable of negotiating TLS 1.2.
  • Compatibility testing across operating systems used by your business to identify any TLS 1.2 support issues.
  • Coordination with your own business partners and customers to notify them of your move to deprecate TLS 1.0/1.1.
  • Understanding which clients may be broken by disabling TLS 1.0/1.1.
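The endpoint-scanning step above can be approximated with a few lines of Python: pin a single TLS version on the client side and see whether a server completes the handshake at it. This is an illustrative sketch (Python 3.7+ assumed for `ssl.TLSVersion`), not a replacement for a full scanner:

```python
import socket
import ssl

def pinned_context(version):
    # Build a client context that will negotiate only the given TLS version.
    ctx = ssl.create_default_context()
    ctx.minimum_version = version
    ctx.maximum_version = version
    return ctx

def negotiates(host, version, port=443, timeout=5):
    # True if `host` completes a handshake at exactly `version` — e.g.
    # negotiates("example.com", ssl.TLSVersion.TLSv1) flags a TLS 1.0 holdout.
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with pinned_context(version).wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False
```

Running this across your inventory for TLSv1 and TLSv1_1 gives a quick first cut of which endpoints still depend on the old protocols.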

Coming soon

To help customers deploy the latest security protocols, we are announcing today that Microsoft will provide support for TLS 1.2 in Windows Server 2008 later this summer.

In conclusion

Learn more about removing dependencies on TLS 1.0/1.1 with this helpful resource:
Solving the TLS 1.0 Problem whitepaper.

Stay tuned for upcoming feature announcements in support of this work.



from Microsoft Secure Blog Staff

Thursday, June 15, 2017

USA has just increased the security for under 19 basketball tournament in Egypt

U.S. Boosts security for U19 basketball team going to Egypt

The U.S. is about to enter a country that is already being torn apart by violence to defend the under-19 basketball world cup for men. But they are not going in unprepared.

The 12-player team will receive the same kind of high-level security that NBA team players and stars receive. This includes a comprehensive cybersecurity program that will also protect the players from cyber warriors of foreign countries while they are abroad.

Read more details 

The post USA has just increased the security for under 19 basketball tournament in Egypt appeared first on Cyber Security Portal.



from Gilbertine Onfroi

Wednesday, June 14, 2017

Cybercrime and freedom of speech – A counterproductive entanglement

This post is authored by Gene Burrus, Assistant General Counsel.

As cybercrime becomes ever more pervasive, the need for states to devote law enforcement resources to battling the problem is apparent. However, states should beware using cybercrime legislation and enforcement resources as a vehicle for restricting speech or controlling content. Doing so risks complicating essential international cooperation and will risk de-legitimizing cybercrime legislation and enforcement. With the growing need for enforcement to thwart cybercriminals, without which the economic and social opportunities of the Internet may well flounder, using “cybercrime” as a label for attacking speech and controlling content may only serve to dilute support, divert resources, and make international cooperation more difficult.

At present over 95 countries either have or are working on cybercrime legislation. This is a good thing, as the more states that have cybercrime laws, especially laws that are largely harmonized to better enable international cooperation, the better for everyone (except the criminals). Cybercrime thrives across borders and between jurisdictions, relying on the internet’s global reach and anonymity, but if cybercriminals are based in a country without adequate cybercrime laws, it becomes even harder to bring them to justice. But defining cybercrime properly is important.

Cybercrime is a word we have all encountered more of in recent years. It tends, rightly so, to bring to mind "hackers" infiltrating computer systems and disrupting them or stealing from them. However, most cybercrime statutes are actually broader than that. They also cover a whole slew of criminal activity mediated by information communication technology (ICT). They deal with the theft of personal information, from credit card details to social security numbers, which can be used for fraud. They also cover acts against property, albeit virtual property, from simple vandalism to sophisticated ransomware. (If "virtual property" sounds too abstract to be a concern, bear in mind that this is the form in which many of our most valuable ideas, from patented designs and trade secrets to copyrighted creative material, are now to be found.) Cybercrime will increasingly bleed into the real world too, thanks to devices connected to the Internet (will cybercriminals soon be stealing self-driving cars through the Internet of Things?) and to attacks on critical infrastructure such as power grids (which will also raise issues of national security).

This broad swathe of cybercrime is widely accepted to be “a bad thing” by most governments and on that basis, cooperation among and between governments in pursuing cybercriminals is possible.

However, many countries' cybercrime legislation also categorizes publishing or transmission of illegal content in a particular country via computer networks or the internet as "cybercrime". And on this, countries are not in wide agreement. When a state's laws criminalize content that other countries don't recognize as criminal, and it then devotes cybercrime enforcement resources to chasing this kind of "crime" rather than what people generally think of as cybercrime, it complicates or prevents international cooperation, discredits cybercrime legislation and enforcement efforts, and diverts resources from solving the serious problem of cybercrime. While there is certainly content that is universally reviled, such as child pornography, there are many disagreements about the creation and dissemination of other content, e.g. political materials or artwork. For some states, free speech is an exceptionally important principle. For others, the control of offensive or dangerous content is essential. Achieving agreement on how to approach these differences is, frankly, going to be a challenge. Once again the Budapest Convention provides a salient example. In 2006, the Convention was extended by a Protocol that criminalized acts spreading racist and xenophobic content. Even some states that signed up to and ratified the original Convention have proved reluctant to add themselves to the Protocol. This is almost certainly not because they approve of racist or xenophobic content; it is simply a complicated issue in the context of their own laws or their perspectives on free speech or legal sovereignty.

If these kinds of disagreements are expanded across other types of content and then brought into the heart of global cooperation against cybercrime, the whole process runs a serious risk of breaking down. States may well be unwilling to cooperate in cybercrime investigations, fearing they might expose people whose actions are in no way criminal by their own standards. And, once again, the only ones to benefit will be the cybercriminals who can play off jurisdictions against one another, ducking and diving across borders and through gaps in legal enforcement.

In many ways, the "cyber" in these "content crimes" is just about distribution, and they do not have to be included in cybercrime statutes and enforcement efforts. Because states differ in the types of speech they want to regulate and the levels of free speech they are willing to tolerate, these issues need to be kept separate from efforts to address what everyone agrees on as cybercrime: attacks on data, on property, on infrastructure. Crimes of content creation and distribution, beyond the most universally reviled such as child exploitation, should be dealt with outside of the essential cooperation on cybercrime itself. This will allow governments to work together globally to protect citizens, businesses and their own national security from cybercriminals.



from Microsoft Secure Blog Staff

[SANS ISC] Systemd Could Fallback to Google DNS?

I published the following diary on isc.sans.org: “Systemd Could Fallback to Google DNS?“.

Google is everywhere and provides free services to everyone. Amongst the huge list of services publicly available, there are the Google DNS, well known as 8.8.8.8, 8.8.4.4 (IPv4) and 2001:4860:4860::8888, 2001:4860:4860::8844 (IPv6)… [Read more]
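For context, systemd-resolved ships with Google's resolvers compiled in as a fallback of last resort. Administrators who would rather not fall back to a third party can override this in resolved.conf (a sketch, assuming a stock systemd build):

```ini
# /etc/systemd/resolved.conf
[Resolve]
# An empty FallbackDNS= overrides the compiled-in fallback
# (Google DNS on most builds) so no fallback servers are used at all.
FallbackDNS=
```

After editing, restart the service (`systemctl restart systemd-resolved`) for the change to take effect.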


[The post [SANS ISC] Systemd Could Fallback to Google DNS? has been first published on /dev/random]



from Xavier

Friday, June 9, 2017

SSTIC 2017 Wrap-Up Day #3

Here is my wrap-up for the last day. Fortunately, after yesterday's social event, the organisers had the good idea to start later… The first set of talks was dedicated to tool presentations.

The first slot was assigned to Florian Maury and Sébastien Mainand: "Réutilisez vos scripts d’audit avec PacketWeaver" ("Reuse your audit scripts with PacketWeaver"). When you perform audits, the same tasks come up again and again. And, as we are lazy people, Florian and Sébastien's idea was to automate them. Typical tasks are grabbing a PCAP, running another tool like arpspoof, modifying packets using Scapy, etc. The chain can quickly become complex. Automating it also makes it easier to deploy a proof-of-concept or a demonstration. The tool uses a Metasploit-like interface: you select your modules and execute them, but you can also chain them, with the output of script1 used as the input of script2. The available modules are classified by OSI layer:

  • phy_l1
  • datalink_l2
  • network_l3
  • transport_l4
  • app_l7

The tool is available here.
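The chaining idea above can be reduced to a few lines. This is only a sketch of the concept (the class and module names are invented, not PacketWeaver's real API):

```python
# Minimal sketch of chaining audit modules, where each module's output
# feeds the next one (hypothetical names, not the real PacketWeaver API).

class Module:
    """An audit step: a named function attached to an OSI-layer category."""
    def __init__(self, name, layer, func):
        self.name, self.layer, self.func = name, layer, func

    def run(self, data):
        return self.func(data)

def run_chain(modules, initial=None):
    """Feed each module's output into the next module's input."""
    data = initial
    for mod in modules:
        data = mod.run(data)
    return data

# Toy chain: "capture" some frames, then keep only the IPv4 ones.
capture = Module("fake_capture", "phy_l1",
                 lambda _: [{"proto": "ipv4"}, {"proto": "arp"}])
filt = Module("ipv4_filter", "network_l3",
              lambda pkts: [p for p in pkts if p["proto"] == "ipv4"])

print(run_chain([capture, filt]))  # [{'proto': 'ipv4'}]
```

The design choice is the interesting part: because every module consumes and produces the same kind of object, any proof-of-concept becomes a reorderable pipeline.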

The second slot was about "cpu_rec.py". This tool was developed to help recognise the CPU architectures used in binary files. A binary file contains instructions to be executed by a CPU (like ELF or PE files), but not only files: it is also interesting to recognise the architecture of firmware images or memory dumps. At the moment, cpu_rec.py recognises 72 types of architectures. The tool is available here.
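The general approach behind such tools is statistical: different instruction sets produce different byte-sequence distributions. A toy version of that idea (the "architectures" and training corpora below are made up; cpu_rec's real statistics are more sophisticated):

```python
# Guess an architecture by comparing byte-bigram histograms against
# known profiles (invented data; illustrates the statistical idea only).
from collections import Counter

def bigram_profile(data: bytes) -> Counter:
    """Histogram of consecutive byte pairs."""
    return Counter(zip(data, data[1:]))

def guess_arch(blob: bytes, profiles: dict) -> str:
    """Return the known architecture whose profile overlaps the most."""
    prof = bigram_profile(blob)
    return max(profiles, key=lambda a: sum((prof & profiles[a]).values()))

# Toy "training" corpora for two fake architectures.
profiles = {
    "arch_a": bigram_profile(b"\x90\x90\x90\xc3" * 50),
    "arch_b": bigram_profile(b"\x01\x02\x03\x04" * 50),
}
print(guess_arch(b"\x90\x90\xc3\x90" * 10, profiles))  # arch_a
```

Sliding this classification over a firmware image, window by window, is what lets a tool spot several architectures inside one blob.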

And we continue with another tool using machine learning: "Le Machine Learning confronté aux contraintes opérationnelles des systèmes de détection" ("Machine learning confronted with the operational constraints of detection systems"), presented by Anaël Bonneton and Antoine Husson. The purpose is to detect intrusions using machine learning. The classic approach is to work with signature-based systems (like an IDS). Those rules are developed by experts but can quickly become obsolete against newer attacks. Can machine learning help? Anaël and Antoine explained the tool that they developed ("SecuML"), but also the process associated with it. Indeed, the tool must first go through a learning phase on samples. The principle is a "classifier" that takes files as input (PDF, PE, …) and returns a conclusion as output (malicious or not malicious). The tool is based on the scikit-learn Python library and is also available here.
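The train-then-classify workflow described above can be illustrated with a dependency-free toy (SecuML itself relies on scikit-learn; the feature names and values here are invented):

```python
# Toy nearest-centroid classifier over file feature vectors,
# illustrating the train/classify workflow (not SecuML's actual code).

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(samples):
    """samples: list of (feature_vector, label) pairs -> label centroids."""
    by_label = {}
    for vec, label in samples:
        by_label.setdefault(label, []).append(vec)
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def classify(model, vec):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], vec))

# Invented features: [entropy, number_of_embedded_urls]
model = train([([7.9, 12], "malicious"), ([7.5, 9], "malicious"),
               ([4.1, 0], "benign"), ([3.8, 1], "benign")])
print(classify(model, [7.7, 10]))  # malicious
```

The operational constraint the talk stressed is exactly what this hides: the quality of the labelled samples used in `train` decides everything.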

Then, Eric Leblond came on stage to talk about… Suricata, of course! His talk was titled "À la recherche du méchant perdu" ("In search of the lost bad guy"). Suricata is a well-known IDS solution that needs no introduction. This time, Eric explained a new feature introduced in Suricata 4.0: a new "target" keyword reflected in the JSON output. The idea arose during a blue team / red team exercise: the challenge of the blue team was to detect the attackers, and it was discovered that it's not so easy. With classic rules, the source of the attack is usually the source of the flow, but that's not always the case. A good example is a web server returning a 4xx or 5xx error; in this case, the attacker is the destination of the flow. The goal of the new keyword is to produce better graphs to visualise attacks. The patch must still be approved and merged into the code, and rules will also need to be updated.
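As a hedged illustration (the rule content is invented, not taken from the talk), a Suricata 4.0 rule using the new keyword could look like the following, with `target:dest_ip` marking the attacked server as the victim in the JSON output:

```
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"Possible web exploit attempt"; content:"/etc/passwd"; http_uri; target:dest_ip; sid:1000001; rev:1;)
```

For the web-server example from the talk, a rule matching the server's error response would instead tag the other side of the flow as the victim.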

The next talk was the only one in English: "Deploying TLS 1.3: the great, the good and the bad: Improving the encrypted web, one round-trip at a time" by Filippo Valsorda & Nick Sullivan. After a brief review of the TLS 1.2 protocol, the new version was reviewed. You must know that, while TLS 1.0, 1.1 and 1.2 were very close to each other, TLS 1.3 is a big jump! Many implementation changes were reviewed. If you're interested, here is a link to the specification of the new protocol.
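For readers who want to experiment, TLS 1.3 is exposed through Python's standard `ssl` module (on builds linked against OpenSSL 1.1.1 or later). A small sketch that pins a client context to TLS 1.3 only, so it will refuse to negotiate 1.2 and below:

```python
# Build a client-side TLS context restricted to TLS 1.3.
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
ctx.maximum_version = ssl.TLSVersion.TLSv1_3

# The context keeps certificate verification from create_default_context()
# and will now only ever speak TLS 1.3.
print(ctx.minimum_version)
```

Wrapping a socket with this context against a 1.2-only server then fails at the handshake, which is a quick way to survey which of your endpoints are ready for the jump.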

After a talk about crypto, we switched immediately to another domain which also uses a lot of abbreviations: telecommunications. The presentation was given by Olivier Le Moal, Patrick Ventuzelo and Thomas Coudray and was called "Subscribers remote geolocation and tracking using 4G VoLTE enabled Android phone". VoLTE means "Voice over LTE" and is based on VoIP protocols like SIP. This protocol is already implemented by many operators around the world and, if your mobile phone is compatible, allows you to make better-quality calls. But the speakers also found some nasty stuff. They explained how VoLTE works, but also how it can leak the position (geolocation) of your contact just by sending a SIP "INVITE" request.
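To make the leak concrete: in IMS/VoLTE deployments, SIP messages can carry a P-Access-Network-Info header (RFC 3455/7315) whose `utran-cell-id-3gpp` parameter identifies the radio cell a subscriber is attached to, which is the kind of field this style of attack abuses. A sketch of extracting it (the sample message below is invented, and real deployments vary in whether they expose this header to the peer):

```python
# Pull the 3GPP cell identifier out of a SIP message, if present.
import re

SIP_MSG = (
    "INVITE sip:bob@example.org SIP/2.0\r\n"
    "P-Access-Network-Info: 3GPP-E-UTRAN-FDD;"
    " utran-cell-id-3gpp=20810012345C0DE\r\n\r\n"
)

def extract_cell_id(sip_message: str):
    """Return the utran-cell-id-3gpp value, or None if absent."""
    m = re.search(r"utran-cell-id-3gpp=([0-9A-Fa-f]+)", sip_message)
    return m.group(1) if m else None

print(extract_cell_id(SIP_MSG))  # 20810012345C0DE
```

Mapping such a cell ID back to a geographic position is then a matter of looking it up in public cell-tower databases.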

To complete the first half-day, a funny presentation was given about drones. For many companies, drones are seen as evil stuff and must be prohibited from flying over certain places. The goal of the presented tool is simply to prevent drones from flying over a specific place and (maybe) spying. There are already solutions for this: DroneWatch, eagles, DroneDefender or SkyJack. What's new with DroneJack? It focuses on drones communicating via Wi-Fi (like the Parrot models). Basically, a drone is a flying access point. It is possible to detect them based on their SSIDs and MAC addresses using a simple airodump-ng. Based on the signal, it is also possible to estimate how far away the drone is flying. As the technology is based on Wi-Fi, there is nothing brand new. If you are interested, the research is available here.
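The detection side boils down to filtering Wi-Fi scan results on known drone fingerprints. A sketch over fake airodump-ng-style records (the SSID prefixes and OUI below are illustrative examples, not an exhaustive fingerprint list):

```python
# Flag access points that look like Wi-Fi drones based on SSID and
# MAC-vendor prefixes (illustrative fingerprints, not a complete list).

DRONE_SSID_PREFIXES = ("ardrone", "bebop")   # Parrot-style SSIDs
DRONE_OUIS = ("90:03:b7",)                   # example Parrot OUI

def looks_like_drone(ap):
    ssid = ap["ssid"].lower()
    mac = ap["mac"].lower()
    return ssid.startswith(DRONE_SSID_PREFIXES) or mac.startswith(DRONE_OUIS)

scan = [
    {"ssid": "HomeWifi", "mac": "aa:bb:cc:dd:ee:ff", "power": -70},
    {"ssid": "ardrone2_123", "mac": "90:03:b7:11:22:33", "power": -40},
]
drones = [ap for ap in scan if looks_like_drone(ap)]
print([ap["ssid"] for ap in drones])  # ['ardrone2_123']
```

The `power` field is what the distance estimation mentioned above would use: a stronger signal means the drone is closer.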

After lunch, what do you usually do? You brush your teeth. Normally, it's not dangerous, but if your toothbrush is connected, it can be worse! Axelle Apvrille presented her research about a connected toothbrush provided by a health insurance company in the USA. The device is connected to a smartphone using a BTLE connection and exchanges a lot of data, of course without any authentication or any other security control. The toothbrush even continues to expose its MAC address via Bluetooth all the time (you have to remove the battery to turn it off). Axelle did not attack the device itself but reversed the mobile application and the protocol used to communicate between the phone and the brush. She wrote a small tool to communicate with the brush, and she also wrote an application to simulate a rogue device and communicate with the mobile phone. The next step was of course to analyse the communication between the mobile app and the cloud service provided by the health insurance company. She found many vulnerabilities that led to the download of private data (up to pictures of kids!). When she reported the vulnerabilities, her account was simply closed by the company! Big fail! And since you pay less for your insurance if you brush your teeth correctly, it's possible to send fake data to get a nice discount. Excellent presentation from Axelle…

To close the event, the ANSSI came on stage to present a post-incident review of the major security breach that affected the French TV channel TV5 in 2015. As a reminder, the channel was completely compromised, up to affecting the broadcast of programs for several days. The presentation was excellent for everybody interested in forensic investigation and incident handling. In the first part, the timeline of all the events that led to the full compromise was reviewed. To summarise, the overall security level of TV5 was very poor and nothing fancy was used to attack them: contractors' credentials were abused, there was a lack of segmentation, default credentials were in place, an RDP server was exposed on the Internet, etc. An interesting fact was the deletion of the firmware on switches and routers, which prevented them from rebooting properly and caused a major DoS. They also deleted VMs. The second part of the presentation explained all the weaknesses and how to improve / fix them. It was an awesome presentation!

My first edition of SSTIC is now over, but I hope not the last one!

[The post SSTIC 2017 Wrap-Up Day #3 has been first published on /dev/random]



from Xavier

The CISO Perspective: Putting lessons from WannaCrypt into practice to avoid future threats

This post is authored by Bret Arsenault, Corporate Vice President and Chief Information Security Officer.

Last month, customers and companies around the world were impacted by the WannaCrypt ransomware attack. Even those not impacted are assessing their risk and taking steps to help prevent such attacks. For everyone, including Microsoft, the attack is a stark reminder of the need for continued focus on security and proven operational techniques. So, after many conversations with my peers in the industry about the attacks in recent weeks and the steps we are each taking to better protect our environments, I wanted to share the common themes that have emerged. I’ve included best practices, technologies and links to more information.

This list is by no means exhaustive, but I hope it is a helpful starting point for those looking for more guidance on how to help protect their environments from present and future threats:

  1. Implement robust update deployment technologies and operational practices so you can deploy updates as consistently and quickly as possible. Companies with complex deployment needs might consider working with IBM BigFix, Landesk/Ivanti, or Microsoft’s System Center Configuration Manager. Our customers can use Windows Update and Windows Update for Business, free of charge. (This is a multi-faceted issue so I’ve added more thoughts below.)
  2. Limit the impact of email as an infection vector. This is particularly important given that more than 90% of cyberattacks start with a phishing email. Developing strong user education and awareness programs can help individual employees identify and avoid phishing emails. Barracuda, FireEye, and Office 365’s Exchange Online Protection and Advanced Threat Protection all provide technology to help prevent phishing and spam emails and other links to malware from getting through to your users.
  3. Ensure the broad deployment of up-to-date anti-malware software. Solutions from industry partners like those in the Microsoft Active Protections Program, as well as technologies like Windows Defender and Advanced Threat Protection, can help protect users and systems from attacks and exploits.
  4. Implement protected backups in the cloud or on-premises, also known as a data protection service. Having multiple versions of your data backed up and protected by measures such as dual factor authentication is a critical layer of protection to help prevent ransomware or malware from compromising your data. Companies can look to vendors like NetApp, CommVault, or Microsoft with Azure Backup for solutions.
  5. Implement multi-factor authentication to protect user identities and minimize the probability of unauthorized access to company resources and data with technologies like RSA SecurID, Ping Identity, Microsoft Authenticator and Windows Hello.
  6. Improve your team’s situational awareness and response capability across your enterprise all the way to the cloud. Cybersecurity attacks are increasingly complex, so businesses need a holistic view of their environment, vulnerability, real-time threat detection, and ideally, the ability to quarantine compromised users and systems. Several companies offer cutting edge capabilities in this regard, including Qualys, Tenable, Rapid7 and Microsoft’s own Azure Security Center and Windows Defender Advanced Threat Protection (WDATP).
  7. Store and analyze your logs to track where an infection starts, how far into your enterprise it went and how to remediate it. Splunk, ArcSight, IBM and Microsoft with our Operations Management Suite – Security all offer capabilities in this area.

Keeping systems up to date is critical so I want to share a few more thoughts about how we approach it as part of our overall security posture. First, there is no one-size-fits-all strategy. A comprehensive approach to operational security – with layers of offense and defense – is critical because attackers will go after every chink in your armor they can find. That said, updating can be difficult in complex environments, and admittedly no environment is 100% secure, but keeping your software up to date is still the number one way to stay secure in a world of motivated attackers and constantly evolving threats.

In terms of how we approach patching and updating at Microsoft, I’m fortunate to have passionate teams working around the clock to limit the impact of infections and update vulnerable systems as quickly as possible. I also know that the Windows team works hard to ensure that they consistently deliver high quality updates that can be trusted by hundreds of millions of users. They conduct thousands of manual and automated tests that cover the core Windows functionality, the most popular and critical applications used by our customers, and the APIs used by our broad ecosystem of Windows apps and developers. The team also reasons over the data, problem and usage reports received from hundreds of millions of devices and triages that real world usage information to proactively understand and fix application compatibility issues as quickly as possible. With all of this context in mind, I want to acknowledge that even more work is needed to make updates easier to deploy and we have teams across the company hard at work improving the experience.

Whether you are a vendor like Microsoft or one of the billions of businesses who count on IT to function, security is a journey, not a destination. That means constant vigilance is required. I hope you find this information helpful on your own journey and as you assess your readiness in light of recent attacks.

You can read more about the WannaCrypt attack in the MSRC Blog, as well as Microsoft President Brad Smith's perspective on the need for collaboration across industry, government and customers to improve cybersecurity. Visit our Get Secure, Stay Secure page regularly for additional guidance, including new insights on ransomware prevention in Windows 10.



from Microsoft Secure Blog Staff

SSTIC 2017 Wrap-Up Day #2

Here is my wrap-up for the second day. From my point of view, the morning sessions were quite hard with a lot of papers based on hardware research.

Anaïs Gantet started with "CrashOS : recherche de vulnérabilités système dans les hyperviseurs" ("CrashOS: hunting system vulnerabilities in hypervisors"). The motivations behind this research are multiple: virtualization is everywhere today, not only on servers but also on workstations. The challenge for the hypervisor (the key component of a virtualization system) is to simulate the exact same hardware platform (same behaviour) for the guest software; it virtualizes access to memory and devices. But do hypervisors do this the right way? Hypervisors are software, and software has bugs. The approach explained by Anaïs is to build a very specific light OS that can perform a bunch of tests, like fuzzing, against hypervisors. The name of this OS is, logically, "CrashOS". It offers an API used to script test scenarios. Once booted, tests are executed and results are sent to the display, but also to the serial port for debugging purposes. Anaïs demonstrated some tests. To date, the following hypervisors have been tested: Ramooflax (a project developed by Airbus Security), VMware and Xen. Some tests that returned errors:

  • On VMware, a buffer overflow and crash of the VM when writing a big buffer to COM1.
  • On Xen, a "FAR JMP" instruction should generate a "general protection" fault, but that's not the case.

CrashOS is available here. A nice presentation to start the day!

The next presentation went deeper and focused again on hardware, more precisely the PCI Express bus found in many computers. The title was "ProTIP: You Should Know What To Expect From Your Peripherals" by Marion Daubignard and Yves-Alexis Perez. Why could it be interesting to keep an eye on our PCIe extensions? Because they all run some software and usually have direct access to the host computer's resources, like memory (for performance reasons). What if the firmware of your NIC contained malicious code and searched for data in memory? They described the PCIe standard, which can be seen as a network with CPUs, memory, a PCI hierarchy (a switch) and a root complex. How do you analyse all the flows passing over a PCIe network? The challenge is to detect the possible paths and alternatives. The best language to achieve this is Prolog (a very old language that I have not used since my studies, but still alive). The tool is called "ProTIP", for "Prolog Tester for Information Flow in PCIe networks", and is available here. The topic gets interesting when you realise what a PCIe extension could do.

Then we got a presentation from Chaouki Kasmi, José Lopes Esteves, Mathieu Renard and Ryad Benadjila: "From Academia to Real World: a Practical Guide to Hitag-2 RKE System Analysis". The talk was dedicated to the Hitag-2 protocol used by the "Remote Keyless Entry" systems of our modern cars. Research in this domain is not brand new; there was already a paper on it presented at Usenix. The talk really focussed on Hitag-2 (crypto) and was difficult for me to follow.

After the lunch break, Clémentine Maurice talked about accessing the host memory from a browser with JavaScript: "De bas en haut : attaques sur la microarchitecture depuis un navigateur web" ("From bottom to top: attacks on the microarchitecture from a web browser"). She started with a deeply detailed review of how DRAM memory works and how read operations make use of a "row buffer" (like a cache). The idea is to be able to detect key presses in the URL bar of Firefox. The amount of work is pretty awesome from an educational point of view, but I have just one question: how could this be used in the real world? If you're interested, all of Clémentine's publications are available here.

The next talk was interesting for people working on the blue team side. Binacle is a tool developed to perform full-text searches on binaries. Guillaume Jeanne explained why full-text search is important and how classic methods fail to index binary files. The goal is to index not only "strings" like IP addresses and FQDNs but also sequences of bytes. After testing several solutions, he found a good one which is not too resource-consuming. The tool and its features were explained, including the Yara integration (and a feature to generate new signatures). To be tested for sure! Binacle is available here.
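Full-text search on binaries typically means indexing fixed-size n-grams of bytes rather than words. A toy version of that idea (Binacle's actual storage format is more elaborate):

```python
# Index byte trigrams -> set of files, then answer "which files could
# contain this byte sequence?" queries (toy sketch of the n-gram idea).
from collections import defaultdict

N = 3  # trigrams of bytes

def index_files(files):
    """Map every byte trigram to the set of files containing it."""
    idx = defaultdict(set)
    for name, data in files.items():
        for i in range(len(data) - N + 1):
            idx[data[i:i + N]].add(name)
    return idx

def search(idx, needle: bytes):
    """Candidate files are those containing every trigram of the needle."""
    grams = [needle[i:i + N] for i in range(len(needle) - N + 1)]
    sets = [idx.get(g, set()) for g in grams]
    return set.intersection(*sets) if sets else set()

files = {"a.bin": b"\x00MZ\x90\x00PE", "b.bin": b"\x7fELF\x02\x01"}
print(search(index_files(files), b"\x7fELF"))  # {'b.bin'}
```

The result is a candidate set, not a guarantee (the trigrams could appear in a different order), so a real tool confirms hits with a final scan, which is also where a Yara integration fits naturally.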

The next tool presented was YaCo: "Rétro-Ingénierie Collaborative" ("Collaborative Reverse Engineering") by Benoît Amiaux, Frédéric Grelot, Jérémy Bouétard, Martin Tourneboeuf and Valerian Comiti. YaCo means "Yet another collaborative tool". The idea is to add a "multi-user" layer to IDA. By default, users work with a local database in IDA; the idea is to sync those databases via a GitLab server. The synchronisation is performed via a Python plugin. They made a cool demo. YaCo is available here.

Sibyl was the last tool presented today, by Camille Mougey. Sibyl helps to identify library functions used in malicious code. Based on Miasm2, it identifies functions by their side effects. More information is available in the GitHub repository.

The next talk was about the Android platform: “Breaking Samsung Galaxy Secure Boot through Download mode” presented by Frédéric Basse. He explained the attacks that can be launched against the bootloader of a Samsung Galaxy smartphone via a bug.

Finally, a non-technical talk was presented by Martin Untersinger: "Oups, votre élection a été piratée… (Ou pas)" ("Oops, your election has been hacked… (Or not)"). Martin is a journalist working for the French newspaper "Le Monde". He already spoke at SSTIC two years ago and came back today to give his view of the "buzz" around the hacking of election processes around the world. Indeed, today when elections are organised, there are often rumours that the democratic process has been altered by state-sponsored hackers. It started in the US and also reached France with the "MacronLeaks". A first fact reported by Martin is that information security today goes way beyond the "techies": all citizens are now aware that bad things may happen. It's not only a technical issue but also a geopolitical one, and therefore very interesting for journalists. Authorities do not always disclose information about the attribution of an attack, because it can be wrong and alter the democratic process of elections. Today, documents are published, but attribution remains a political decision. It's touchy and may lead to diplomatic issues. Journalists also face challenges:

  • Publish leaked docs or not?
  • Are they real or fake?
  • Ignore the information, or risk participating in a disinformation campaign?

But it is clear that good communication is a key element.

The day closed with the second rump session, with a lot of submissions (21!). Amongst them were some funny ideas, like using machine learning to generate automatic paper submissions to the SSTIC call for papers, an awesome analysis of the leaked LinkedIn passwords, connected cars, etc. Everybody then moved to the city centre to attend the social event, with nice food, drinks and lots of interesting conversations.

Today, a lot of tools were presented. The balance between the two types of presentation is interesting: pure security research is interesting, but sometimes very difficult to apply in the real context of an information system, whereas the tool presentations were quick talks with facts and solutions that can be easily implemented.

[The post SSTIC 2017 Wrap-Up Day #2 has been first published on /dev/random]



from Xavier

Thursday, June 8, 2017

Cross-border cooperation: The road to a more stable and secure Internet

Australia and China have recently agreed to strengthen their bilateral cooperation in cybersecurity. Cooperation between states on cybersecurity is essential in order to combat cross-border cybercrime and to reduce the risks of inter-state cyberwar. Bilateral cybersecurity agreements between states can help build that cooperation. The real goal, however, should be to achieve multi-lateral consensus and agreement as a basis for a much needed Digital Geneva Convention.

The internet is a multi-stakeholder environment. Not only has it become central to businesses and individuals that operate across borders, but thanks to cyberspace the interactions of states are no longer as constrained by geography as they once were. A network of bilateral agreements between multiple states can attempt to model that complexity and depth of relationships. However, differences between individual agreements, and gaps of coverage between certain states that have no agreements, can be exploited by cybercriminals and can also promote misunderstanding or mistrust between states. Multilateral approaches avoid this problem by creating a single, coherent approach, although they are harder to organize, as reconciling the needs and concerns of multiple states is not straightforward.

The Australia-China deal is a good thing, as both countries are undertaking not to conduct or support cyber-enabled theft of intellectual property (IP), trade secrets, etc. with the intent of obtaining competitive advantage. In many ways it echoes the US-China cyber agreement, which has been credited with a decline in attacks on the US emanating from China (though notably those attacks have not stopped altogether).

Significantly, Australia and China were clear that alongside their bilateral agreement they would observe multilateral "norms of behavior" that were created in July 2015 by the United Nations Group of Governmental Experts (UNGGE). These norms are the culmination of work over many years (with key reports in 2010 and 2013) to build a genuine international consensus on what responsible states should and should not do in cyberspace. They, and the work the UNGGE has continued to do since then, are extraordinarily important for delivering a workable Digital Geneva Convention.

The UNGGE is preparing a further report in September of this year, which should be another important step on the road to a more stable and secure Internet. It is not the only international group helping to shape how states behave in cyberspace, and when you look at the range of organizations involved you can begin to detect a broad momentum towards a genuine multilateral agreement on cyberspace. Since 2013 the OSCE, for example, has worked through a series of confidence building measures (CBMs) that should enable states to minimize the risks of misunderstandings and reduce their fear of attack via cyberspace. Equally significant, in early April 2017 the G7 made a major declaration on responsible state behavior in cyberspace, calling explicitly on governments active in cyberspace to abide by laws, to respect norms of behavior, and to foster trust and confidence with other states.

Outside of the "West", the Shanghai Cooperation Organization (SCO) has made its own contributions, which were built on by the Sino-Russian cybersecurity agreement that emerged at around the same time as the bilateral US-China cybersecurity deal, with a similar bilateral pledge not to hack one another. The ASEAN Regional Forum (ARF) has also stepped up its engagement on state-to-state behavior in cyberspace, running an ASEAN Cyber Capacity Program (ACCP) that builds member states' capacities, skills base and incident response capabilities. And another regional group, the Organization of American States (OAS), passed a resolution only a few weeks ago that committed members to increasing cooperation, transparency, predictability and stability in cyberspace through alignment with the UNGGE's work.

These states and international fora have to be given immense credit for laying the essential foundations for the next, pressing step: the creation of a binding, multilateral agreement between states that protects civilians and civilian infrastructure in cyberspace. In other words, a Digital Geneva Convention. Bilateral agreements, such as those between China and Australia, are helpful and important, of course, but the emphasis for all those involved in cyberspace should be to support the UNGGE and other multilateral fora as they work to create and spread rules, principles and norms for governing state behavior in cyberspace.



from Paul Nicholas

SSTIC 2017 Wrap-Up Day #1

I'm in Rennes, France to attend my very first edition of the SSTIC conference. SSTIC is an event organised in France, by and for French people. The acronym means "Symposium sur la sécurité des technologies de l'information et des communications" ("Symposium on the security of information and communication technologies"). The event has a good reputation for its content, but it is also known for how fast its tickets sell out: usually all of them are gone in a few minutes, spread across 3 waves. I was lucky to get one this year. So, here is my wrap-up! This is already the fifteenth edition, with a new venue hosting 600 security people. A live stream is also available, and a few hundred people are following the talks remotely.

The first presentation was given by Octave Klaba, the CEO of the OVH operator. OVH is a key player on the Internet with many services; it is known via the BGP AS16276. Octave started with a complete overview of the backbone that he built from zero a few years ago. Today, it has a capacity of 11Tbps and handles 2500 BGP sessions. It's impressive how well this CEO knows his "baby". The next part of the talk was a deep description of their solution "VAC", deployed to handle DDoS attacks. For information, OVH handles ~1200 attacks per day! They usually don't communicate about them, except when some customers are affected (the case of Mirai was given as an example by Octave). They chose the name "VAC" for "Vacuum Cleaner". The goal is to clean the traffic as soon as possible, before it enters the backbone. An interesting fact about anti-DDoS solutions: it is critical to detect attacks as soon as possible. Why? Assume that your solution detects a DDoS within x seconds; attackers will then launch attacks lasting less than x seconds. Evil! The "VAC" can be seen as a big proxy and is based on multiple components that can filter specific types of protocols/attacks. Interesting: to better handle some DDoS attacks, the OVH teams reversed some gaming protocols to better understand how they work. Octave described in deep detail how the solution has been implemented and is used today… for any customer! It is a free service! It was really crazy to get so many technical details from a… CEO! Respect!
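Octave's point about detection latency can be made concrete with the simplest possible detector: an alert fires only once traffic stays above a rate threshold for a full observation window, so any attack shorter than the window is never flagged. A toy sketch (invented numbers, not OVH's VAC logic):

```python
# Sliding-window rate detector: declare a DDoS only after `window`
# consecutive seconds above `threshold` packets per second.
from collections import deque

def detect(pps_samples, threshold=1_000_000, window=3):
    """pps_samples: one packets-per-second value per second.
    Returns the second at which the attack is declared, or None."""
    recent = deque(maxlen=window)
    for t, pps in enumerate(pps_samples):
        recent.append(pps > threshold)
        if len(recent) == window and all(recent):
            return t
    return None

burst = [100] * 5 + [5_000_000] * 2 + [100] * 5   # 2-second burst: missed
flood = [100] * 5 + [5_000_000] * 10              # sustained flood: caught
print(detect(burst), detect(flood))  # None 7
```

This is exactly the arms race described above: shrinking the window catches shorter attacks but raises the false-positive rate, which is why fast, protocol-aware filtering is so valuable.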

The second talk was “L’administration en silo” (“Siloed administration”) by Aurélien Bordes and focused on best practices for Windows administration. Aurélien started with a fact: when you ask a company how its infrastructure is organised, people usually talk about users, data, computers, partners but… they don’t mention administrative accounts. From where, and how, are all the resources managed? Basically, there are three categories of assets, which can be classified by colour or tier:

  • Red: resources for admins
  • Yellow: core business
  • Green: computers

The most difficult layer to protect is… the yellow one. After some facts about the security of AD infrastructures, Aurélien explained how to harden Kerberos. The solution is based on FAST (Flexible Authentication Secure Tunneling, RFC 6113), a framework that tunnels and protects Kerberos exchanges. Another interesting tool developed by Aurélien: the Terminal Server Security Auditor. Interesting presentation, but my conclusion is that it increases the complexity of Kerberos, which is already not easy to master.

During the previous talk, Aurélien presented a slide with potential privilege escalation issues in an Active Directory environment. One of them was the WSUS server, which was the topic of the research presented by Romain Coltel and Yves Le Provost. During a pentest engagement, they compromised a network “A” but also discovered a network “B” completely disconnected from “A”. Completely? Not really: WSUS servers were communicating between them. After a quick recap of WSUS and its features, they explained how they compromised the second network “B” via the WSUS server. Such a server is based on three major components:

  • A Windows service to sync
  • A web service to talk to clients (configs & push packages)
  • A big database

This database is complex and contains all the data related to patches and systems. Attacking a WSUS server is not new: in 2015, a Black Hat presentation demonstrated a man-in-the-middle attack against a WSUS server. Today, Romain and Yves used another approach: they wrote a tool to inject fake updates directly into the database. The important step is to use the stored procedures so as not to break the database integrity. Note that the tool also has a “social engineering” side: fake metadata about the malicious patch can be injected to entice the admin to approve its installation on the target system(s). To be deployed, the “patch” must be a binary signed by Microsoft. Good news (for attackers): plenty of signed tools can be abused to perform malicious tasks. They used PsExec for the demo:

psexec -> cmd.exe -> net user /add

Because the database is synced between the different WSUS servers, it was possible to compromise network “B”. The tool they developed to inject data into the WSUS database is called WSUSpendu. A good recommendation is to put WSUS servers in the “red” zone (see above) and to consider them as critical assets. Very interesting presentation!

After two presentations focusing on the Windows world, back to the UNIX world, and more precisely Linux, with the init system called systemd. Since it was adopted by the major Linux distributions, systemd has been at the centre of huge debates between the pro-init and pro-systemd camps. I’m no exception: I find it not easy to use, it introduces complexity, etc. But the presentation gave nice tips that can be used to improve the security of daemons started via systemd. The first, basic tip is to not run as root, but many newer features are really interesting:

  • seccomp-bpf can be used to disable access to certain syscalls (like chroot() or obsolete syscalls)
  • capabilities can be dropped (ex: CAP_NET_BIND_SERVICE)
  • mount namespaces can hide paths (ex: /etc/secrets is not visible to the service)

Nice quick tips that can be easily implemented!
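The tips above map directly to directives in a systemd unit file. Here is a minimal sketch of a hardened service (the service name and paths are illustrative, not from the talk):

```ini
[Unit]
Description=Example hardened daemon

[Service]
ExecStart=/usr/local/bin/mydaemon
# Basic tip: don't run as root
User=mydaemon
NoNewPrivileges=true
# seccomp-bpf: deny obsolete syscalls and chroot()
SystemCallFilter=~@obsolete chroot
# Drop all capabilities (list the ones to keep, if any)
CapabilityBoundingSet=
# Mount namespace: /etc/secrets is not visible to the service
InaccessiblePaths=/etc/secrets
PrivateTmp=true
```

Each directive degrades gracefully: if the daemon only needs these restrictions relaxed for one path or capability, it can be whitelisted explicitly instead of disabling the whole protection.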

The next talk was about Landlock, by Mickaël Salaün. The idea is to build an unprivileged sandbox and run your application in this restricted space. The perfect example used by Mickaël is a multimedia player: this kind of application embeds many parsers and is, therefore, a good candidate for attacks or bugs. The recommended solution is, as always, to write good (read: safe) code; the sandbox must be seen as an extra security control. Mickaël explained how the sandbox works and how to implement it. The media-player example denied write access to the filesystem except when the target file is a pipe.

After lunch, a set of talks was scheduled around the same topic: code analysis. It started with “Static Analysis and Run-time Assertion checking” by Dillon Pariente and Julien Signoles, who presented Frama-C, a framework for C code analysis.

Then Philippe Biondi, Raphaël Rigo, Sarah Zennou and Xavier Mehrenberger presented BinCAT (“Binary Code Analysis Tool”). It can analyse binaries (x86 only) but never executes the code: just by tracking memory, registers and other state, it can deduce a program’s behaviour. BinCAT is integrated into IDA. They performed a nice demo against a keygen tool. BinCAT is available online and can also be executed in a Docker container. The last talk in this set was “Désobfuscation binaire: Reconstruction de fonctions virtualisées” (“Binary deobfuscation: reconstruction of virtualized functions”) by Jonathan Salwan, Marie-Laure Potet and Sébastien Bardin. The principle of this binary protection is to make a binary more difficult to analyse/decode without changing its original capabilities. This is not the same as a packer: here, a kind of virtual machine emulates a proprietary bytecode. Those three presentations represented a huge amount of work but were too specific for me.

Then, Geoffroy Couprie and Pierre Chifflier presented “Writing parsers like it is 2017”. Writing parsers is hard; don’t try to write your own from scratch, you’ll probably fail. Yet parsers are present in many applications and are hard to maintain (old, handwritten code that is hard to test & refactor). Parser bugs can have huge security impacts: just remember the Cloudbleed bug! The proposed solution is to replace classic parsers with something stronger. The criteria are: memory safety, the ability to call and be called by C code and, if possible, no garbage collection. Rust is well suited to this, with parser libraries like nom. To test the approach, it has been applied in projects like the VLC player and the Suricata IDS. Suricata was a good candidate, with many challenges (safety, performance); the candidate protocol was TLS. Speaking of VLC and parsers, the recent vulnerability affecting its subtitle parser is a perfect example of why parsers are critical.
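To make the memory-safety argument concrete, here is a small sketch (mine, not from the talk, and without the nom library) of what a safe parser looks like in Rust. It parses the 5-byte TLS record header; a truncated or oversized input yields an error value instead of an out-of-bounds read:

```rust
// TLS record header: content type (1 byte), version (2 bytes), length (2 bytes).
#[derive(Debug, PartialEq)]
struct TlsRecordHeader {
    content_type: u8,
    version: (u8, u8),
    length: u16,
}

fn parse_tls_header(input: &[u8]) -> Result<TlsRecordHeader, &'static str> {
    // Bounds check up front: safe slicing means no buffer over-read is possible.
    if input.len() < 5 {
        return Err("truncated header");
    }
    let length = (u16::from(input[3]) << 8) | u16::from(input[4]);
    // RFC 5246 caps a record payload at 2^14 + 2048 = 18432 bytes.
    if length > 18432 {
        return Err("record too long");
    }
    Ok(TlsRecordHeader {
        content_type: input[0],
        version: (input[1], input[2]),
        length,
    })
}

fn main() {
    // 0x16 = handshake, version (3, 3) = TLS 1.2, length 0x0200 = 512
    let bytes = [0x16, 0x03, 0x03, 0x02, 0x00];
    println!("{:?}", parse_tls_header(&bytes));
    // A truncated input is rejected cleanly, not read out of bounds.
    println!("{:?}", parse_tls_header(&[0x16, 0x03]));
}
```

The same shape (input in, `Result` of parsed-value-or-error out) is what combinator libraries like nom compose into full protocol parsers.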

The last talk of the day was about caradoc. Developed by ANSSI (the French agency), it’s a toolbox able to decode PDF files. The goal is not to extract and analyse potentially malicious streams from PDF files; like the previous talk, the main idea is to avoid parsing issues. After reviewing the basics of the PDF file format, Guillaume Endignoux and Olivier Levillain made two demos. In the first one, the same PDF file was opened in two readers (Acrobat and Foxit) and the displayed content was not the same; this could be used in phishing campaigns or to defeat an analyst. The second demo was a malicious PDF file that crashed Foxit but not Adobe (a DoS). Nice tool.

The day ended with a “rump” session (called lightning talks at other conferences). I’m really happy with the content of the first day. Stay tuned for more details tomorrow! If you want to follow the talks live, the streaming is available here.

[The post SSTIC 2017 Wrap-Up Day #1 has been first published on /dev/random]



from Xavier

NIST Cybersecurity Framework: Building on a foundation everyone should learn from

On May 16-17, Microsoft participated in a workshop organized by the National Institute of Standards and Technology (NIST) on its recently released Framework for Improving Critical Infrastructure Cybersecurity (“Cybersecurity Framework”) Draft Version 1.1. It was a useful discussion, not least because it showed NIST’s continuing commitment to engage in genuine multi-stakeholder dialogue in the development of cybersecurity guidelines and risk management practices. As a colleague of mine wrote some time ago, “Proactive, structured engagements, using public consultation, open workshops with diverse stakeholders, including industry experts, and iterative drafts, really does yield products that are more relevant to the challenges at hand and useful to stakeholders.”

The topical additions to Draft Version 1.1 of the Framework, specifically supply chain security and cybersecurity metrics, show both the durability of the overall approach and its ability to accommodate evolving needs. However, changes must be incorporated in a way that preserves and strengthens the Framework’s broad usability. In particular, Microsoft identified two key areas that should be revised consistent with that goal:

  1. Approaches for understanding risk management posture and goals, including the measurement and metrics guidance, should be developed in supplementary documents rather than in the Framework itself, because these approaches are neither sufficiently stable nor adequately mature.
  2. Supply chain risk management should be integrated throughout the Core’s Subcategories and Informative References rather than within the Implementation Tiers to reduce confusion about how to use the Tiers.

Microsoft has supported the Framework since its inception, and it is integrated into our enterprise risk management program. It influences our security risk culture and informs how we communicate about security capability maturity across our senior management and with our Board of Directors. In conversations with customers, partners, and other industry stakeholders, Microsoft has learned that our positive experience is not unique. In fact, since 2014, the Framework has gained broad recognition as effective guidance for cybersecurity risk management due to its applicability across sectors and organizations of different sizes.

This broad usability has meant that the Framework has gained traction internationally. As governments around the world develop, update, and implement legislation, regulation, or guidelines to protect critical infrastructures, the Framework – as a cross-sector baseline to manage cybersecurity risks – can inform these national efforts and promote interoperability across jurisdictions. Italy and Australia, for example, have already done so. But more can be done. Microsoft continues to advocate for the U.S. Government to promote use of the Framework domestically and abroad. There is not only an opportunity but a need to internationalize the Framework’s approach. Greater use of the Framework will help enhance cybersecurity across the globe and, importantly, advance economic growth.

To do so, the U.S. Government should promote the Framework globally as the keystone economic objective of this Administration’s international strategy and engagements on cyber. Its efforts should be coordinated across agencies, leveraging the opportunities afforded by their missions. For example, the Department of Commerce should highlight the benefits of interoperability to other countries’ economies and security in bilateral, multilateral, and regional trade missions and negotiations; NIST should move relevant parts of the Framework into an international standards body; and the State Department should translate the Framework into at least the six official languages of the United Nations and promote it in bilateral engagements and in regional and multilateral forums.

As a provider of technology products and services to more than one billion customers around the world, Microsoft is immensely supportive of approaches such as the Cybersecurity Framework. We have collaborated with domestic and international partners on the Framework, and we remain committed to working with industry and government to use, promote, and strengthen approaches based on both international standards and the kind of public-private dialogue and partnership that this May’s workshop exemplified.

Microsoft submitted comments on Framework Draft Version 1.1.



from Paul Nicholas

Tuesday, June 6, 2017

"Guest Blog - Nudging Towards Security (Touchpoints) - Part 5"

Editor's Note: This is part of a series of blog posts by Sahil Bansal from Genpact on the topic Nudging Towards Security. What is a touchpoint? A touchpoint is a point of contact or interaction. In any organization, the Information Security team has a lot of user touchpoints. A few examples … Continue reading Guest Blog - Nudging Towards Security (Touchpoints) - Part 5

from lspitzner