Thursday, March 30, 2017

How to solve the diversity problem in security

This post is authored by Ann Johnson, Vice President, Enterprise Cybersecurity Group.

I was in the midst of composing this blog on diversity in cybersecurity when a Fortune article on Women in Cybersecurity found its way to my LinkedIn feed. It was promoted to me by a man I know and respect. As I reflected on the content of this piece in the context of my post, a key detail leapt out at me. It was a male member of the cybersecurity industry advocating for women in this instance. So, what does it all mean?

I have enjoyed a technology career to date spanning 30 years. I have been fortunate to encounter amazing mentors along the way, female and male, many of whom I met very early in my career. My professional experiences, good and bad, successes and failures, have shaped who I am today. Through those experiences, I have become convinced we need more diversity in cybersecurity. Whilst there are no easy answers to solving this problem, understanding some of the root causes will help inform our decisions.

We need to hire and mentor more women and diverse talent in security not only because it is the right thing to do, but also because gaining the advantage in fighting cybercrime depends on it. If we do not diversify the cyber talent pool:

  • We are not likely to fill the estimated 1M+ global cybersecurity openings.
  • We will continue to engender groupthink among a few “experts” with similar backgrounds. Remember: diversity is not just about the color of our skin, gender, or religious or ethnic background; it is also about being surrounded by people whose varied experiences contribute new ideas to problem solving.
  • We become weaker relative to our adversaries. Cybercriminals will continue to exploit the unconscious bias inherent in the industry by understanding and circumventing the homogeneity of our methods. If we are to win the cyberwars through the element of surprise, we need to make our strategy less predictable.

I firmly believe most bias is unconscious. Certainly, conscious bias exists, but in my view the majority are doing the best they can with the background and experiences that have shaped their lives. We tend to mentor and hire people we know and trust. If our professional sphere is limited to a certain segment of the population, then the hiring pool simply replicates the makeup of our network.

The cybersecurity industry has historically been predominantly male for a few reasons:

  • Women pursue STEM education at a lower ratio than men.
  • Many cybersecurity professionals come from traditional law enforcement or investigative backgrounds, and these industries are currently male majorities.
  • Women are reluctant to pursue careers in cyber because they don’t see themselves reflected in the employee pool, thereby creating a self-perpetuating cycle.

Given the serious implications the lack of diversity has for cybersecurity, how do we attract, recruit, mentor and retain a broader, more inclusive workforce? The answer lies in a programmatic approach where we continuously measure effectiveness and adapt accordingly. The steps below, while not easy, and certainly not exhaustive, are imperative and urgent. The bad actors are well-funded and organized – innovating their methods and growing their numbers – and certain to become a permanent fixture of our digital future. Our ability to remain a step ahead depends on evolving our tools and talent through the following:

  • College recruiting. This is a must. Microsoft has a robust college hiring program and we make a conscious effort to include this talent on our security teams. We invest heavily in intern opportunities and new graduate hiring programs. We are not the only company to do so, but we need more firms to join us with a commitment to well executed and measured programs. We are also building a relationship with the Security Advisor Alliance which runs meaningful programs at both the high school and college levels, to provide cybersecurity education and industry recruiting.
  • Participation in our own rescue. I heard this expression a few years ago in a training class, and it stuck. The cybersecurity industry created this diversity problem, so we bear the onus to find a solution. We need to make training and retraining programs available to technical as well as non-technical talent, making cybersecurity a viable path. Including training options for those with non-technical degrees is key to addressing our well documented talent shortage in cyber. I know that this can work first hand. I was law school-bound with a degree in Communication and Political Science, when I decided that a technology career was more apt. By spending time on the go-to-market side and taking advantage of every vendor program available to further my technical training, I fulfilled my desired path.
  • Participation in organizations that promote diversity in cybersecurity. There are many who are tackling this initiative, but two that come to mind are: International Consortium of Minority Cybersecurity Professionals and #brainbabe.
  • Education on unconscious bias. I mentioned earlier that I believe most people are not aware of the language or behavior that implies bias. There is no intent to offend on their part. They are simply reflecting their life experience. Unfortunately, if you are a diverse person who works in these environments, you may not feel welcomed and often you choose to leave. You certainly won’t recommend these companies or work environments to your peer group – thus furthering the diversity gap. It is imperative that we educate about unconscious bias to address this issue.
  • Realization that all of us are smarter than one of us. Our CEO Satya Nadella says this on a regular basis to remind us that working through and with teams makes us all better. And working with team members that bring diverse perspectives and thoughts can only elevate team creativity and effectiveness.
  • Tailored mentorship. Recruitment and training programs alone will not change the cybersecurity employee landscape short-term. Diverse talent needs to hear from group members who have succeeded in cyber. Mentors that are trained and incented to grow group diversity are key to breaking stereotypes and misconceptions, as well as fostering optimism in those who would elect to pursue cybersecurity careers.

We will only solve the diversity problem as an industry. The industry’s conferences are all tackling diversity through meaningful dialogue which will hopefully lead to further investments. It is time for everyone to embrace a cybersecurity future where all who feel they can make a positive impact are welcomed, and our ability to recruit and retain these persons is free of the caveats and excuses of the past.



from Microsoft Secure Blog Staff

[SANS ISC] Diverting built-in features for the bad

I published the following diary on isc.sans.org: “Diverting built-in features for the bad“.

Sometimes you may find very small pieces of malicious code. Yesterday, I caught this very small Javascript sample with only 2 lines of code… [Read more]


[The post [SANS ISC] Diverting built-in features for the bad has been first published on /dev/random]



from Xavier

Wednesday, March 29, 2017

Future-proofing principles against technological change

In recent years, governments’ concerns about cybersecurity, data protection, and other information and communications technology (ICT) related issues have led to new policies, legislation, and regulation. In response, the ICT industry has consistently called for laws and rules that focus on outcomes and on principles, rather than on processes and prescriptions. This call has become so ubiquitous, however, that there is a danger it has become a hollow form of words. A truly outcome-oriented approach would be revolutionary and perhaps government or even industry will shy away from it, having forgotten why we need this approach in the first place.

So, I’d like to take a moment to re-examine the why and the how of outcomes- and principles-led legislation and regulation.

Technology moves fast; in 2007 we had the first iPhone and now we’re rolling out cloud computing. As a result, laws designed for telephony or paper files are increasingly difficult to apply, if not wholly irrelevant. Governments are acting on this realization, but as they do so they are inevitably looking to enshrine certain unchangeable points of principle into their new laws – from European privacy to American freedom of speech. And this is where the essential rationale for principles-led approaches is most obvious. Immovable principles could be laid down as particular behaviors within particular technologies, but then they would live and die with that technology. Allowing unchangeable points of principle to become contingent on something we know will change, i.e. technology, won’t work for governments or societies. A different approach is needed, one that future-proofs our principles against technological change.

So how would that actually work? On the surface it seems simple enough: governments state the outcomes they expect or principles they demand, give whatever limited controls/incentives they think necessary, and allow ICT providers and regulators to get on with it. The reality is necessarily more complex. For one thing, even within a single nation there may be varied societal perspectives on what is wanted in principle. For another, the outcomes of today’s solutions can form tomorrow’s problems. In light of this, an effective “future-proofing” process may require new policy or regulatory bodies that are more flexible and more broad-based, because they can take account of divergent priorities and can also look more clearly at future consequences.

In the ferment of technological change, we can forget that society changes too, sometimes profoundly. Once-concrete principles can shift over time, and what was once acceptable or helpful can cease to be so. Amusingly, I know someone who is a Freeman of the City of London, with a right to drive sheep across a bridge over the Thames. This might have been very useful at one point, but today most people would rather have free parking. More seriously, in the past women and minorities have been unfairly treated (and in places still are, even today). Applying this insight to the heart of law- and rule-making might seem odd, especially to the lawyers and technocrats that currently dominate the process. But if a principles-led approach is to have true meaning and longevity, then the inclusiveness of the process must be genuine.

Equally, what seems like a good solution today can have unintended consequences. In the 1890s motorized vehicles solved the problem of horse-drawn vehicles’ endless manure and carcasses, but eventually led to pollution and transport crises. In the 1920s lead in petrol solved “knocking” in automobile motors but paved the way for “a catastrophe for public health”. In the 1990s and 2000s diesel and biofuels addressed petrol’s CO2 emissions but caused particulate air pollution and food supply problems. The chances of unintended consequences from government interventions in ICT are even more significant. Technology has spread throughout our lives, businesses, and governments. As a result, unexpectedly problematic outcomes are more likely and are potentially more damaging. Any structured process pushing an outcomes-led approach needs to have the breadth of insights and expertise to minimize this risk. This means, once again, expanding the participants in the new approach beyond the current roster of legal and regulatory experts.

In conclusion, in order to help formulate lasting policies, law and regulations with a genuine focus on outcomes and principles, governments will need advice from new bodies with diverse legal, technical, social, and even philosophical membership. The new, non-traditional membership of these bodies will likely have to go beyond current “public private partnerships” if they are to deal with the operational differences, varying priorities, and distinct needs of those affected by new rules – now and in the foreseeable future. This will be a revolution in policy-making, equal in its own way to the technological revolution that has sparked it.




from Paul Nicholas

[SANS ISC] Logical & Physical Security Correlation

I published the following diary on isc.sans.org: “Logical & Physical Security Correlation“.

Today, I would like to review an example how we can improve our daily security operations or, for our users, how to help in detecting suspicious content. Last week, I received the following email in my corporate mailbox. The mail is written in French but easy to understand: It is a notification regarding a failed delivery (they pretended that nobody was present at the delivery address to pick up the goods)… [Read more]

[The post [SANS ISC] Logical & Physical Security Correlation has been first published on /dev/random]



from Xavier

Tuesday, March 28, 2017

Germany steps up leadership in cybersecurity

Cyberattacks are on the rise worldwide, but many countries are making strides in promoting and developing cybersecurity by developing policy frameworks, encouraging investment in research and development, and by driving awareness of cybersecurity best practices. Germany is one of the countries that has been trying to increase the cybersecurity of its broader online ecosystem for a number of years and is today more committed to that goal than ever. And what Germany does matters not just because it is one of the top five global economies, but because it is one of the leading European Union (EU) member states. What German policy-makers think and feel can have a major effect on the EU, a trading bloc of 500 million people with a GDP on a par with that of the USA.

Microsoft’s Security Intelligence Report (SIR) shows Germany performs well compared to the global average when it comes to encounters with malware and the scale of infected computers (see the regional breakdown specific to Germany). Overall, the SIR shows the ongoing nature of the conflict between those delivering cybersecurity and those trying to break through, and even in the Germany of 2016 there was an uneven but upwards trend in encounters and infections.

A fundamental part of responding to these threats and the potentially significant economic damage they pose is, in my view, cooperation between government and the private sector. The new cybersecurity strategy seems to indicate that this is also the view of German policy-makers. Germany’s recognition of the importance of developing and implementing effective cyber security norms – along with the necessary means of verification/attribution – is very encouraging. And German support and leadership in the pertinent multi-lateral discussions will be crucial. In this context, it is worth noting that German leadership, during its 2016 Chairmanship of the Organization for Security and Co-operation in Europe, yielded concrete positive results in the related field of developing cybersecurity related confidence-building measures – which critically rely on different segments of society working together.

The strategy builds on Germany’s IT Security Law (IT-SiG), passed in 2015, which promoted cooperation between the German Federal Office for Information Security (BSI) and the industry in protecting critical infrastructure. Infrastructure protection is, of course, only one aspect of cybersecurity, and cooperation between governments and the private sector is only one part of the overall solution (for example, my Microsoft colleagues have also been arguing strongly for risk-based approaches to cybersecurity). Nonetheless, both the IT-SiG and the proposed strategy seem to be steps in the right direction. Cooperation between states and the private sector, including those who create information and communication technology (ICT) products and those who use them, seems like a very good way to develop effective cybersecurity policies and practices. What is true for Germany should be equally true for other EU member states.

The challenge is that, currently, not all companies are comfortable with information exchange with the authorities (only 13 percent of companies in Germany are). It would be a terrible irony if, just as governments realized the need for public-private partnerships in cybersecurity, companies stepped back from the opportunity. To prevent such a development, IT regulators will have to demonstrate the added value of receiving this information. They can do this by anonymizing it, sharing it with those private sector entities that need to know about it, and acting on it to protect their systems and their customers.

Looking ahead, in order to enhance IT security in general and increase the protection of critical infrastructure in particular, public-private partnerships are essential, but they require commitment and buy-in from both sides. Microsoft is ready to play its part.




from Paul Nicholas

Monday, March 27, 2017

Giving CISOs assurance in the cloud

This post is authored by Mark McIntyre, Chief Security Advisor, Enterprise Cybersecurity Group.

Recently, I hosted a Chief Information Security Officer roundtable in Washington, DC. Executives from several US government agencies and systems integrators attended to share cloud security concerns and challenges, such as balancing collaboration and productivity against data protection needs, cyber threat detection, and compliance. Toward the end of the day, one CISO reminded me he needed assurance. He asked, “How can we trust Microsoft to protect our data? And, how can I believe what you say?”

This post provides an opportunity to share important updates and assurances about practices and resources that Microsoft uses to protect data and user privacy in the Cloud. It also offers information on resources available to CISOs and others that demonstrate our continuing investments in transparency.

Security at scale

Increasingly, government officials as well as industry analysts and executives are recognizing and evangelizing the security benefits of moving to hyper-scale cloud service providers. Microsoft works at this scale, investing $15B in the public cloud. The internet user maps below provide useful insight into why and where we are making these investments. Figure 1 represents internet usage in 2015. The size of the boxes reflects the number of users. The colors indicate the percentage of people with access to the internet.

Figure 1, source: “Cyberspace 2025: Today’s Decisions, Tomorrow’s Terrain”

Now look at Figure 2, showing expected internet usage in 2025.  As you can see, global internet use and accompanying economic activity will continue to grow.

Figure 2

In addition to serving millions of people around the world, we are also moving Microsoft’s 100,000+ employees and our corporate infrastructure and data to the Cloud. We must therefore be confident that we can protect our resources as well as our users’.

How do we do it?  Microsoft invests over $1B per year in cybersecurity and data protection.  We start by ensuring that the software powering our data centers is designed, built and maintained as securely as possible. This video illustrates the world-class security Microsoft applies to data center protection.  We also continue to improve on years of development investments in the Security Development Lifecycle (SDL), to ensure that security is addressed at the very beginning stages of any product or service.  In the Cloud, the Operational Security Assurance framework capitalizes on the SDL and on Microsoft’s deep insights into the cybersecurity threat landscape.

One way that Microsoft detects cybersecurity activity in our data centers is the Intelligent Security Graph. Microsoft has incredible breadth and depth of signal and information to analyze: 450B authentications per month across our cloud services, 400B emails scanned for spam and malware, over a billion enterprise and consumer devices updated monthly, and 18B+ Bing scans per month. This intelligence, enhanced by the expertise of Microsoft’s world-class security researchers, analysts, hunters, and engineers, is built into our products and our platform – enabling customers, and Microsoft, to detect and respond to threats more quickly (Figures 3 & 4). Microsoft security teams use the graph to correlate large-scale critical security events, using cloud-first machine learning and behavior- and anomaly-based search queries to surface actionable intelligence. The graph enables teams to collaborate internally and apply preventive measures or mitigations in near real-time to counter cyber threats. This supports protection for users around the world, and assures CISOs that Microsoft has the breadth and scale to monitor and protect users’ identities, devices, apps, data, and infrastructure.

Figure 3

Figure 4
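To give a flavor of what an anomaly-based query does, here is a deliberately tiny sketch in Python. It is illustrative only (this is not how the Intelligent Security Graph is implemented), and the data shape, threshold, and function names are invented for the example.

from statistics import mean, stdev

# Toy z-score check: flag users whose latest daily sign-in count
# deviates strongly from their own history. Purely illustrative.
def anomalous_users(signin_counts, threshold=3.0):
    flagged = []
    for user, history in signin_counts.items():
        if len(history) < 8:                    # need enough history to model "normal"
            continue
        baseline, latest = history[:-1], history[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (latest - mu) / sigma > threshold:
            flagged.append(user)
    return flagged

# A user who suddenly authenticates far more often than usual:
print(anomalous_users({"alice": [10, 12, 9, 11, 10, 12, 11, 10, 96]}))  # ['alice']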

Access to data

Technology is critical for advancing security at hyper-scale, so Microsoft continues to evolve the ways in which administrators access corporate assets. The role of network administrators is significant. In our cloud services, we employ Just-in-Time and Just Enough Administration access, under which admins are provided the bare minimum window of time and the minimum physical and logical access needed to carry out a validated task. No admin may create or approve their own ticket, either. Further, Windows Server 2016 customers can implement these policies internally. Securing and managing data centers at scale is an ever-evolving process based on the needs of our customers, the changing threat landscape, regulatory environments and more.
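To illustrate the time-boxing and no-self-approval rules described above, here is a minimal conceptual sketch in Python. The class, TTL, and names are invented for the example; this models the policy, not Microsoft’s internal tooling.

import time
import uuid

GRANT_TTL_SECONDS = 15 * 60              # a bare-minimum window for one task

class AccessGrant:
    """A short-lived admin grant, scoped to a single validated task."""
    def __init__(self, admin, task_id, approver):
        if admin == approver:
            raise PermissionError("admins may not approve their own tickets")
        self.id = uuid.uuid4().hex
        self.admin, self.task_id = admin, task_id
        self.expires_at = time.time() + GRANT_TTL_SECONDS

    def is_valid(self):
        return time.time() < self.expires_at  # access lapses automatically

grant = AccessGrant(admin="ops1", task_id="TICKET-42", approver="ops2")
assert grant.is_valid()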

Compliance

Microsoft works with auditors and regulators around the world to ensure that we operate data centers at the highest levels of security and operational excellence. We maintain the largest compliance portfolio in the industry, for example against the ISO 22301 business continuity standard. In addition, Microsoft maintains certifications such as CSA STAR Certification, HITRUST, FACT and CDSA, which many of our cloud competitors do not. For more about Microsoft certifications, visit the Microsoft Trust Center Compliance page.

Transparency

Being compliant with local, industry, and international standards establishes that Microsoft is trustworthy, but our goal is to be trusted. Toward that end, and to ensure we address the needs of CISOs, Microsoft provides a wealth of information about cloud services, designed to provide direct and customer self-service opportunities to answer three key questions:

  • How is my data secured and protected?
  • How does Microsoft Cloud help me be compliant with my regulatory needs?
  • How does Microsoft manage privacy around my data?

The comments at our roundtable that prompted this blog show that our cloud security and compliance resources can be difficult to find, so while we double down on our efforts to raise awareness, bookmark this update and read below.  We operate the following portals, designed to facilitate self-service access to security and compliance information, FAQs and white papers, in convenient formats, and tailored to an organization’s geography, industry and subscription(s):

  • The Microsoft Trust Center, a centralized resource for enterprise customers to find answers about what Microsoft is doing to protect data, comply with regulatory requirements, and verify that we are doing what we say.
  • The Service Trust Portal (STP), available under nondisclosure to current and potential Microsoft customers. It includes hundreds of important third-party audit reports, information on certifications, and internal security documents for Azure, O365, Dynamics CRM Online, and Yammer. Examples include SOC and ISO audit reports.
  • The Service Assurance Portal, available to current O365 users, offers the same level of access but directly through the O365 subscription. This is a unique “transparency window” that provides customers with an in-depth understanding of how we implement and test controls to manage confidentiality, integrity, availability, reliability, and privacy around customer data. Not only do we share the “what” about controls, but also the “how” about testing and implementation.

Government Security Program

Microsoft also participates in the Government Security Program as another key transparency initiative. Through the GSP, national governments (including regulators) may access deep architecture details about our products and services, up to and including source code. The GSP also provides participants with opportunities to visit Microsoft headquarters in Redmond to meet face to face with the teams that operate, monitor, and defend our company and products and services—including data centers—from cyber threats. They can also visit any of our Transparency Centers in Redmond, Brussels, Brasilia, and Singapore. Several dozen governments around the world use the GSP to obtain greater insight into how Microsoft builds, operates and defends its data centers, and by extension, how we protect users.

Microsoft stands ready to work with CISOs to raise awareness and ensure access to the resources discussed above. Visit the following sites to learn more. Microsoft has also created a dedicated team of cybersecurity professionals to help move you securely to the Cloud and protect your data. Learn more about the Enterprise Cybersecurity Group, or contact your local Microsoft representative.

Blogs: Microsoft Secure Blog and Microsoft On the Issues
Learn more about the Microsoft Enterprise Cloud
Read the Microsoft Security Intelligence Report
Follow us on Twitter: @MSFTSecurity



from Microsoft Secure Blog Staff

What you need to know about CASBs

Per Frost & Sullivan, more than 80 percent of employees admit to using non-approved SaaS apps in their jobs. The number of cloud services used by corporate employees is also quickly outpacing internal IT estimates. While IT groups typically estimate that employees are using 51 different services, the actual number is 15 times greater.

And it’s not just individual employees that are turning to shadow IT. Increasingly, non-approved SaaS applications are being adopted by entire work groups or departments, without IT’s knowledge, and with little consideration for the security risks they bring.

As employees continue to reach for tools and services that may not be IT-approved, IT professionals know they need to balance security risk tolerance while empowering departments and teams to achieve higher productivity.

How can you secure critical data without compromising productivity?

While the urge to block shadow IT is understandable, it is, at best, a short-term solution. Not only does it reduce an organization’s ability to innovate, it inevitably results in employees finding ways around the restrictions.

Rather than blocking users from accessing the services they need to do their jobs efficiently, IT administrators need to find ways to monitor these services, analyze their risk profile, and offer alternatives for apps that fail to meet security or compliance needs.

Cloud Access Security Brokers: Flexibility meets control

According to Gartner, Cloud Access Security Brokers (CASBs) are “on-premises or cloud-based security policy enforcement points that are placed between cloud service consumers and cloud service providers.”

They give organizations a detailed picture of how their employees are using the cloud.

  • Which apps are they using?
  • Are these apps risky for my organization?
  • Who are the top users?
  • What does the upload/download traffic look like?
  • Are there any anomalies in user behavior such as: impossible travel, failed logon attempts, suspicious IPs?

Such behaviors can indicate whether a user’s account has been compromised or whether the worker is taking unauthorized actions.
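As a rough illustration of one of those checks, the sketch below flags “impossible travel” between two geolocated sign-ins. It is a toy model: the field names and speed threshold are assumptions, not any particular CASB’s logic.

from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class SignIn:
    user: str
    ts: float     # epoch seconds
    lat: float
    lon: float

def haversine_km(a, b):
    # great-circle distance between two sign-in locations
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def impossible_travel(prev, cur, max_kmh=900.0):
    hours = max((cur.ts - prev.ts) / 3600.0, 1e-6)
    return haversine_km(prev, cur) / hours > max_kmh   # faster than an airliner

a = SignIn("bob", 0.0, 50.85, 4.35)         # Brussels
b = SignIn("bob", 1800.0, 37.77, -122.42)   # San Francisco, 30 minutes later
print(impossible_travel(a, b))              # True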

Along with better threat protection, CASBs offer IT professionals better visibility and control over the apps used in their environment. Once you have discovered the full extent of the apps used in your environment, you can then set policies that control the data stored in these apps for data loss prevention.

Exploring a CASB solution can be a great step to enhancing your security environment. With better visibility, protection, and management over your shadow IT, you can give employees the choice to use the apps they need, without sacrificing the security and compliance your organization demands.

To learn more about shadow IT and how CASBs can help your organization, download the e-book.



from Microsoft Secure Blog Staff

Friday, March 24, 2017

[SANS ISC] Nicely Obfuscated JavaScript Sample

I published the following diary on isc.sans.org: “Nicely Obfuscated JavaScript Sample“.

One of our readers sent us an interesting sample that was captured by his anti-spam. The suspicious email had an HTML file attached to it. By having a look at the file manually, it is heavily obfuscated and the payload is encoded in a unique variable… [Read more]

[The post [SANS ISC] Nicely Obfuscated JavaScript Sample has been first published on /dev/random]



from Xavier

TROOPERS 2017 Day #4 Wrap-Up

I’m just back from Heidelberg, so here is the last wrap-up for the TROOPERS 2017 edition. This day was a little bit more difficult due to fatigue and yesterday’s social event. That’s why the wrap-up will be shorter… The second keynote was presented by Mara Tam: “Magical thinking … and how to thwart it”. Mara is an advisor to executive agencies on information security issues. Her job focuses on the technical and strategic implications of regulatory and policy activity. In my opinion, the keynote topic remained a classic one. Mara explained her vision of the problems that the infosec community is facing when information must be exchanged with “the outside world”. Personally, I found Mara’s slides difficult to understand.

For the first presentation of the day, I stayed in the main room – the offensive track – to follow “How we hacked distributed configuration management systems” by Francis Alexander and Bharadwaj Machhiraju. As we already saw yesterday with SDN, automation is a hot topic and companies tend to install specific tools to automate configuration tasks. Such software is called DCM or “Distributed Configuration Management”. These tools simplify the maintenance of complex infrastructures, synchronization and service discovery. But, as software, they also have bugs or vulnerabilities and are, for a pentester, a nice target. They’re real goldmines because they contain not only configuration files; if the attacker can change those files, it’s even more dangerous. DCMs can be agent-less or agent-based. It is the second case that Francis & Bharadwaj targeted. They reviewed three tools:

  • HashiCorp Consul
  • Apache Zookeeper
  • CoreOS etcd

For each tool, they explained the vulnerabilities they found and how they were exploited, up to remote code execution. The crazy part of the story is that none of them has authentication enabled by default! To automate the search for DCMs and their exploitation, they developed a specific tool called Garfield. Nice demos were performed during the talk, with many remote shells and calc.exe spawned here and there.
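To make the “no authentication by default” point concrete, here is a small recon sketch of my own (not Garfield) that checks whether the stock endpoints of these three tools answer unauthenticated requests. Ports and paths are the defaults; only scan hosts you are authorised to test.

import socket
import urllib.request

def http_open(url):
    # True if the endpoint answers 200 without credentials
    try:
        with urllib.request.urlopen(url, timeout=3) as r:
            return r.status == 200
    except OSError:
        return False

def zookeeper_ruok(host, port=2181):
    # Zookeeper speaks its own protocol: the "ruok" four-letter command
    try:
        with socket.create_connection((host, port), timeout=3) as s:
            s.sendall(b"ruok")
            return s.recv(16) == b"imok"
    except OSError:
        return False

host = "192.0.2.10"  # placeholder target
print("consul kv open:", http_open(f"http://{host}:8500/v1/kv/?recurse"))
print("etcd keys open:", http_open(f"http://{host}:2379/v2/keys/?recursive=true"))
print("zookeeper alive:", zookeeper_ruok(host))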

The next talk was my favourite of the day. It was about a tool called Ruler, used to pivot through Exchange servers. Etienne Stalmans presented his research on Microsoft Exchange and how he reverse engineered the protocol. The goal is simply to get a shell through Exchange. The classic phases of an attack were reviewed:

  • Reconnaissance
  • Exploitation
  • Persistence (always better!)

Basically, Exchange is a mail server, but many more features are available: calendar, Lync, Skype, etc. Exchange must be able to serve local and remote users, so it exposes services on the Internet. How to identify companies that use an Exchange server, and how to find it? Simply thanks to the auto-discovery feature implemented by Microsoft. If your domain is company.com, Outlook will search for https://company.com/autodiscover/autodiscover.xml (plus alternative URLs if this one isn’t useful). Etienne did some research and found that 10% of Internet domains have this process enabled. After some triage, he found that approximately 26,000 domains are linked to an Exchange server. Nice attack surface! The next step is to compromise at least one account. Here, classic methods can be used (brute force, rogue wireless AP, phishing or dumps of leaked databases). The exploitation itself is performed by creating a rule that will execute a script. The rule looks like: when the word “pwned” is present in the subject, start “badprogram.exe”. A very nice finding is the way Windows converts UNC paths to WebDAV:

\\host.com@SSL\webdav\pew.zip\s.exe

will be converted to:

https://host.com/webdav/pew.zip

And Windows will even extract s.exe for you! Magic!

Etienne performed a nice demo of Ruler, which automates all the process described above. Then, he demonstrated another tool called Liniaal, which takes care of persistence. To conclude, Etienne explained briefly how to harden Exchange to prevent this kind of attack. Outlook 2016 blocks unsafe rules by default, which is good. An alternative is to block WebDAV and use MFA.
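As a toy illustration of the autodiscover reconnaissance step (my own sketch, not Ruler itself), the following probes the well-known endpoint variants for a domain; only probe domains you are authorised to test:

import urllib.error
import urllib.request

CANDIDATES = [
    "https://{d}/autodiscover/autodiscover.xml",
    "https://autodiscover.{d}/autodiscover/autodiscover.xml",
]

def find_autodiscover(domain):
    for tpl in CANDIDATES:
        url = tpl.format(d=domain)
        try:
            with urllib.request.urlopen(url, timeout=5):
                return url
        except urllib.error.HTTPError:
            return url            # even a 401/403 reveals an Exchange endpoint
        except OSError:
            continue              # DNS failure or timeout: try the next variant
    return None

print(find_autodiscover("example.com"))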

After the lunch break, Zoz came back with another funny presentation: “Data Demolition: Gone in 60 seconds!”. The idea is simple: when you throw away devices, you must be sure that they don’t contain any remaining critical data. Classic examples are hard disks and printers, but also highly mobile devices like drones. The talk was a kind of “MythBusters” show for hard drives! Different techniques were tested by Zoz:

  • Thermal
  • Kinetic
  • Electric

For each of them, different scenarios were presented and the results demonstrated with small videos. Crazy!

Destroying HDs

What was interesting to notice is that most techniques failed because the disk platters could still be “cleaned” (e.g. by removing the dust) and perhaps made readable again using forensic techniques. For your information, the most effective techniques were: plasma cutter or oxygen injector, nail guns, and HV power spike. Just one piece of advice: don’t try this at home!

There was a surprise talk scheduled; the time slot was offered to The Grugq. A renowned security researcher, he presented “Surprise Bitches, Enterprise Security Lessons From Terrorists”. He talked about APTs, but not as a buzzword: he gave his own view of how APTs work. For him, the term “APT” was invented by Americans and means “Asia Pacific Threat“.

I finished the day back in the offensive track. Mike Ossmann and Dominic Spill from Great Scott Gadgets presented “Exploring the Infrared, part 2“. The first part was presented at ShmooCon; luckily, they started with a quick recap: what infrared light is and its applications (remote controls, sensors, communications, heating systems, …). The talk was a suite of nice demos using replay-attack techniques to abuse tools/toys that work with IR, like a Duck Hunt game and a shark remote controller. The most impressive one was the replay attack against the Bosch audio transmitter. This very expensive device is used at big events for instant translations. They reverse engineered the protocol and were able to play a song through the device… You can imagine the impact of such an attack at a live event (e.g. switching voices, replacing translations with others, etc). They have many more tests in the pipeline.

IR Fun

The last talk was “Blinded Random Block Corruption” by Rodrigo Branco. Rodrigo is a regular speaker at TROOPERS and always provides good content. His presentation was very impressive. The idea is to evaluate the problems around memory encryption: how and why to use it? Physical access to the victim is the worst case: an attacker has access to anything. You implemented full-disk encryption? Cool, but a lot of information sits in memory while the system is running. Access to memory can be gained via FireWire, PCIe, PCMCIA and new USB standards. What about memory encryption? It’s good, but encryption alone is not enough: controls must be implemented. The attack explained by Rodrigo is called “BRBC” or “Blinded Random Block Corruption“. After giving the details, he performed a nice demo: becoming root on a locked system. Access to memory is even easier in virtualized (or cloud) environments. Indeed, many hypervisors allow enabling a “debug” feature per VM. Once activated, the administrator has write access to the VM’s memory. Using a debugger, you can mount the BRBC attack and bypass the login procedure. The video demo was impressive.

So, the TROOPERS 10th anniversary edition is over. I spent four awesome days attending nice talks and meeting a lot of friends (old & new). I learned a lot and my todo-list has already expanded.


[The post TROOPERS 2017 Day #4 Wrap-Up has been first published on /dev/random]



from Xavier

Thursday, March 23, 2017

5 Super easy ways to bump your computer security


Protecting your computer or smartphone should come naturally given the rising tide of cyber threats across the world. On the internet, you’ll come across numerous ways to safeguard your digital life, but most of them describe advanced methods that are only fit for a company or organization. What do you do when you are just an everyday user?

We have collected a list of super easy ways you can protect your computer if you are an everyday user of the internet.

Use private mode to browse the internet

  • Every browser comes with a private mode these days. Regardless of what internet browser you are using, if you are outside your home, never browse the internet without turning on private mode. Private mode protects you by making sure your browsing history, cookies, and form data are deleted once you close the browser.

Never ignore system updates

  • There’s a reason why Windows keeps telling you to update. OS vendors release updates on a near-daily basis to fight hackers and other cyber criminals. By keeping your system up to date, you are protecting it by closing loopholes in OS security.

Back up your data every now and then

  • Backing up your data should become a habit. Even if you become a victim of cybercrime and lose your data to hackers or cyber criminals, you can always restore a backup of your most recent data. There are tons of tools out there that let you back up your entire computer to the cloud.

Don’t click on links in an email

  • Our inboxes constantly receive emails from different entities across the internet. Sometimes you’ll come across emails that have links inside. Never click on such a link if the sender is not someone you trust. Hackers spread links via email in the hope that you will click them and fill out their forms. Sometimes merely clicking on the link and opening the page gives hackers enough of an opening to inject malicious code into your computer.

Install anti-malware software

Your anti-virus alone is not enough to defend against threats on the internet. Plenty of malware can bypass a traditional anti-virus, and only a good anti-malware tool can protect you against it. Remember, hackers these days usually use malware to breach security, so make sure you have anti-malware installed on your computer or smartphone.

The post 5 Super easy ways to bump your computer security appeared first on Cyber Security Portal.



from Gilbertine Onfroi

TROOPERS 2017 Day #3 Wrap-Up

The third day is already over! Today the regular talks were split across three tracks: offensive, defensive and a specific one dedicated to SAP. The first slot at 09:00 was, as usual, a keynote. Enno Rey presented ten years of TROOPERS. What happened during all those editions? The main idea behind TROOPERS has always been that everybody must learn something by attending the conference but… with fun and many interactions with other peers! The goal was to mix infosec people coming from different horizons and, of course, to use the stuff learned to contribute back to the community. Things changed a lot during these ten years; some are better while others remain the same (or worse?). Enno reviewed all the keynotes presented and, for each of them, gave some comments – sometimes funny. The conference itself also evolved, with a SAP track, the Telco Sec Day, the NGI track and the move to Heidelberg. Some famous vulnerabilities were covered, like MS08-067 or the RSA hack. What we’ve seen:

  • A move from theory to practice
  • Some things/crap that stay the same (same shit, different day)
  • A growing importance of the socio-economic context around security.

Has progress been made? Enno reviewed infosec in three dimensions:

  • As a (scientific) discipline: From theory to practice. So yes, progress has been made
  • In enterprise environments: Some issues on endpoints have been fixed, and Windows security has become much better – but now people use Android :). Security in the datacenter also improved, but now there is the cloud. 🙂
  • As a constituent for our society: Complexity is ever growing.

Are automated systems the solution? There are still technical and human factors that are important: “Errare Humanum Est”, said Enno. Information security is still a work in progress, and we have to work for it. Again, the example of IoT crap was used. Education is key. So, yes, the TROOPERS motto is still valid: “Make the world a better place”. Based on the applause from the audience, this was a great keynote by a visibly moved Enno!

I started my day in the defensive track. Veronica Valeros presented “Hunting Them All”. Why do we need hunting capabilities? A definition of threat hunting is “to help in spotting attacks that would get past our existing controls and do more damage to the business“.

Veronica's Daily Job: Hunting!

People are constantly hit by threats (spam, phishing, malware, trojans, RATs, … you name them). Being always online also increases our attack surface. Attacks are very lucrative and attract a lot of bad guys. Sometimes, malware may change things: a good example is the ransomware plague, which made people aware that backups are critical. Threat hunting is not easy because, sitting on your network, you don’t always know what to search for. And malicious activity does not always rely on top-notch technologies. Attackers are not all ‘l33t’. They just want to bypass controls and make their malicious code run. To achieve this, they have a lot of time, they abuse the weakest link and they hide in plain sight. To sum up: they follow the “least effort” rule. Which sounds legit, right? Veronica has access to a lot of data. Her team is performing hunting across hundreds of networks, millions of users and billions of web requests. How to process this? Machine learning comes to the rescue, and Veronica’s job is to check and validate the output of the machine learning process they developed. But it’s not a magic tool that will solve all issues. The focus must be on what’s important: from 10B requests/day down to 20K incidents/day using anomaly detection, trust modelling, event classification, and entity & user modelling. Veronica gave an example: the botnet Sality has been active since 2003 and is still present. IOCs exist but they generate a lot of false positives, and regular expressions are not flexible enough. Can we create algorithms to automatically track malicious behaviour? For some threats it works, for others not. Veronica’s team is tracking 200+ malicious behaviours, and 60% of that tracking is automated. “Let the machine do the machine work”. As a good example, Veronica explained how referrers can be the source of important data leaks from corporate networks.
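To make the referrer example concrete, here is a toy sketch of mine (not Veronica’s pipeline) that scans a proxy log for internal hostnames leaking through the Referer header. The log format and field names are assumptions.

import csv

INTERNAL_SUFFIXES = (".corp.example.com", ".intranet.example.com")

def referrer_leaks(proxy_log_csv):
    """Expects columns: client_ip, dest_host, referer."""
    leaks = []
    with open(proxy_log_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            ref = row.get("referer", "")
            external = not row["dest_host"].endswith(INTERNAL_SUFFIXES)
            if external and any(s in ref for s in INTERNAL_SUFFIXES):
                leaks.append(row)  # internal URL leaked to an outside server
    return leaks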

My next choice was “Securing Network Automation” by Ivan Pepelnjak. In a previous talk, Ivan explained why Software Defined Networking failed, but many vendors have since improved, which is good. So today, his new topic was about ways to make automation better from a security perspective. Indeed, we must automate as much as possible, but how do we make it reliable and secure? If a process is well defined, it can be automated, as Ivan said. Why automate? From a management perspective, the same reasons always come up: increase flexibility while reducing costs, deploy faster, and compete with public cloud offerings. About the cloud: do we need to buy or to build? In all cases, you’ll have to build if you want to automate. The real challenge is to move quickly from development to test and production. To achieve this, instead of editing a device configuration live, create configuration text files and push them to a GitLab server. Then you can virtualise a lab, pull the configs and test them. Did it work? Then merge with the main branch. A lot can be automated: device provisioning, VLAN management, ACLs, firewall rules. But the challenge is to have strong controls to prevent issues upfront and to troubleshoot when needed. A nice quote was:

“To make a mistake is human; to automatically deploy that mistake to all servers is DevOps.”

You remember the recent Amazon outage story? Be prepared to face issues. To automate, you need tools, and those tools must be secure. An example was given with Ansible. The issue is that it gathers information from untrusted sources:

  • Scripts are executed on managed devices: what about data injection?
  • Custom scripts are included in data gathering: More data injection?
  • Returned data are not properly parsed: Risk of privilege escalation?

The usual controls to put in place are:

  • OOB management
  • Management network / VR
  • Limit access to the management hosts
  • SSH-based access
  • Use SSH keys
  • RBAC (commit scripts)

Keep in mind: your network is critical, so network automation (network programming) is critical too. Don’t write the code yourself (hire a skilled Python programmer for this task), but you must know what the code should do. Test, test, test and, once done, test again. As an example of a control, you can run a traceroute before and after the change and compare the paths. Ivan published a nice list of requirements to give your vendor when looking for a new network device. If your current vendor cannot provide basic requirements like an API, change vendor!
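Here is a quick sketch of that traceroute control: record the forwarding path before the change and compare it afterwards. It shells out to the system traceroute and parses naively; the target address is a placeholder.

import subprocess

def trace_path(target):
    out = subprocess.run(
        ["traceroute", "-n", target],
        capture_output=True, text=True, timeout=120,
    ).stdout
    # keep the hop address (second column) of each hop line, skip the header
    return [line.split()[1] for line in out.splitlines()[1:] if line.split()]

before = trace_path("192.0.2.50")
# ... apply the automated change here ...
after = trace_path("192.0.2.50")
if before != after:
    print("path changed:", before, "->", after)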

After lunch, back to the defence & management track with “Vox Ex Machina” by Graeme Neilson. The title looked intriguing: would it be offensive or defensive content? Voice recognition is used more and more (examples: Cortana, Siri, etc) but also in non-IT systems like banking or support lines: “Press 1 for X or press 2 for Y”. But is it secure? Voice recognition is not new hype; there are references to the “Voder” as early as 1939, and another system, the Vocoder, followed a few years later. Voice recognition is based on two methods: phrase-dependent or phrase-independent (this talk focused on the first method). The process is split into three phases:

  • Enrolment: you record a phrase several times. Each recording differs slightly, and the analysis is stored as a voice print.
  • Authentication: based on feature extraction with MFCCs (Mel-Frequency Cepstral Coefficients).
  • Confidence: Returned as a percentage.

The next part of the talk focused on the tool Graeme developed. Written in Python, it tests a remote API. The supported attacks are: replay, brute force and voice print fixation. An important remark made by Graeme: even if some services pretend otherwise, your voice is NOT a key! Every time you pronounce a word, the generated file is different. That’s why brute-forcing is completely different with voice recognition: you know when you are getting closer thanks to the returned confidence (in %), instead of a password comparison which returns only “0” or “1”. The tool developed by Graeme is available here (or will be soon after the conference).
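To see why returning a confidence score is so dangerous, here is a toy model (mine, not Graeme’s tool) where an attacker hill-climbs against a fake scoring API. Everything below is invented for illustration.

import random

TARGET = [0.42, 0.77, 0.13, 0.90]                    # secret enrolled voice print

def api_confidence(candidate):
    # stand-in for the remote API: similarity to the enrolled print
    err = sum(abs(a - b) for a, b in zip(candidate, TARGET))
    return max(0.0, 1.0 - err / len(TARGET))

guess = [0.5] * len(TARGET)
for _ in range(5000):                                # keep mutations that raise confidence
    i = random.randrange(len(guess))
    trial = guess.copy()
    trial[i] += random.uniform(-0.05, 0.05)
    if api_confidence(trial) >= api_confidence(guess):
        guess = trial

print(round(api_confidence(guess), 3))               # converges toward 1.0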

The next talk was presented by Matt Graeber and Casey Smith: “Architecting a Modern Defense using Device Guard”. The talk was scheduled in the defensive track but it covered both worlds. The question that interests many people is: is whitelisting a good solution? Bad guys (and red teams) are trying to find bypass strategies. What mitigations are available for the blue teams? The attacker’s goal is clear: execute HIS code on YOUR computer. There are two types of attackers: those who know what controls you have in place (enlightened) and the novices who aren’t equipped to handle your controls (e.g. the massive phishing campaigns dropping Office documents with malicious macros). Device Guard offers the following protections:

  • Prevents unauthorised code execution,
  • Restricted scripting environment
  • Prevents policy tempering and virtualisation based security

The speakers were honest: Device Guard does NOT protect against all threats, but it increases the noise attackers make (evidence). Bypasses are possible. How?

  • Policy misconfiguration
  • Misplaced trust
  • Enlightened scripting environments
  • Exploitation of vulnerable code
  • Implementation flaws

How you deploy your policy depends on your environment, but also on the security ecosystem we live in. Would you trust all code signed by Google? Probably yes. Do you trust any certificate issued by Symantec? Probably not. The next part of the talk was a review of the different bypass techniques (offensive) and then some countermeasures (defensive). A nice demo was performed with PowerShell to bypass constrained language mode. Keep in mind that some allowed applications might be vulnerable. Do you remember the VirtualBox signed driver vulnerability? Despite those problems, Device Guard offers many advantages:

  • Uncomplicated deployment
  • DLL enforcement implicit
  • Supported across windows ecosystem
  • Core system component
  • Powershell integration

Conclusion: whitelisting is always a huge debate (pro/con). Despite the flaws, it forces adversaries to reset their tactics, and by doing so you disrupt the attackers’ economics: making the system harder to compromise costs them more time and money.

After the afternoon coffee break, I switched to the offensive track again to follow Florian Grunow and Niklaus Schiess, who presented “Exploring North Korea’s Surveillance Technology”. I had no idea about the content of the talk but it was really interesting and an eye-opener! It’s a fact: if it’s locked down, it must be interesting. That’s why Florian and Niklaus performed research on the systems provided to citizens of the DPRK (“Democratic People’s Republic of Korea“). The research was based on papers published by others and on leaked devices / operating systems; they never went over there. The motivation behind the research was to get a clear view of the surveillance and censorship put in place by the government. It started with the Linux distribution called “Red Star OS”. It is based on Fedora/KDE through multiple versions and looks like a modern Linux distribution, but… First finding: the certificates installed in the browser all come from the Korean authorities. Also, some suspicious processes cannot be killed. Integrity checks are performed on system files, and files are changed on the fly by the OS (for example, files transferred via USB storage). The OS adds a watermark at the end of the file which helps to identify the computer that handled it. If the file is transferred to another computer, a second watermark is added, etc. This is a nice method to track dissidents and to build a graph of relations between them. Note that this watermark is added only to data files and that it can easily be removed. An antivirus is installed but can also be used to delete files based on their hash. Of course, the AV update servers are maintained by the government. After the desktop OS, the speakers reviewed some “features” of the “Woolim” tablet. This device is based on Android and does not have any connectivity onboard; you must use a specific USB dongle for this (provided by the government, of course). When you try to open some files, you get the warning message “This is not signed file”. Indeed, the tablet can only work with files signed by the government or created locally (based on RSA signatures). The goal, here again, is to prevent the distribution of media files. From a network perspective, there is no direct Internet access and all traffic is routed through proxies. An interesting application running on the tablet is called “TraceViewer”. It takes a screenshot of the tablet at regular intervals. The user cannot delete the screenshots, and random physical controls can be performed by the authorities to keep the pressure on citizens. This talk was really an eye-opener for me. Really crazy stuff!
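As a generic illustration of the appended-watermark technique (the real Red Star OS format is not modelled here), simply comparing the tail of a file before and after it crossed a machine is enough to spot an added identifier:

def tail_bytes(path, n=64):
    # read the last n bytes, where an appended watermark would live
    with open(path, "rb") as fh:
        fh.seek(0, 2)              # jump to end of file
        size = fh.tell()
        fh.seek(max(0, size - n))
        return fh.read()

print(tail_bytes("document_after_transfer.jpg").hex())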

Finally, my last choice was another defensive talk: “Arming Small Security Programs” by Matthew Domko. The idea is to generate a network baseline, exactly like we do for applications on Windows. For many organizations, the problem is detecting malicious activity on the network. An IDS quickly becomes unhelpful due to the volume and the limitations of signatures. Matthew’s idea was to:

  • Build a baseline (all IPs, all ports)
  • Write snort rules
  • Monitor
  • Profit

To achieve this, he used Bro, some kind of Swiss army knife for IDS environments. Matthew gave a quick introduction to the tool and, more precisely, focussed on its scripting capabilities. Logs produced by Bro are also easy to parse. The tool developed by Matthew implements a simple baseline script: it collects all connections to IP addresses / ports and logs what is NOT known. The tool is called Bropy and should be available soon after the conference. A nice demo was performed. I really like the idea behind this tool, but it should be improved and features added before use in big environments. I would recommend having a look at it if you need to build a network activity baseline!
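To give an idea of how simple such a baseline can be, here is a minimal sketch of mine (not Bropy) over Bro’s conn.log. Column positions assume the default conn.log layout, where id.resp_h and id.resp_p are the 5th and 6th tab-separated fields.

def baseline(conn_log):
    seen = set()
    with open(conn_log) as fh:
        for line in fh:
            if line.startswith("#"):          # skip Bro's header lines
                continue
            f = line.rstrip("\n").split("\t")
            seen.add((f[4], f[5]))            # (responder IP, responder port)
    return seen

known = baseline("conn_lastweek.log")         # learn "normal" from a quiet period
for pair in baseline("conn_today.log") - known:
    print("never seen before:", pair)         # candidates for alerts / Snort rules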

The day ended with the classic social event: local food, drinks and nice conversations with friends, which is priceless. I have to apologize for the delay in publishing this wrap-up. Complaints can be sent to Sn0rkY! 😉

[The post TROOPERS 2017 Day #3 Wrap-Up has been first published on /dev/random]



from Xavier

"Time for Password Expiration to Die"

Editor's Note: This is based on a post I did to the SANS GIAC mailing list. I've been meaning to blog about password expirations, and this was the kick in the butt I needed. This is also the perfect example of the saying "amateurs mitigate risk, professionals manage risk." Per Thorsheim, Cormac Herley, I and … Continue reading Time for Password Expiration to Die

from lspitzner

Wednesday, March 22, 2017

A new best practice to protect technology supply chain integrity

This post is authored by Mark Estberg, Senior Director, Trustworthy Computing. 

The success of digital transformation ultimately relies on trust in the security and integrity of information and communications technology (ICT). As ICT systems become more critical to economic prosperity, governments and organizations around the world are increasingly concerned about threats to the technology supply chain. These concerns stem from fear that an adversary might tamper with or manipulate products during development, manufacture, or delivery. This poses a challenge to the technology industry: If our products are to be fully trusted, we must be able to provide assurance to our customers that the technology they reviewed and approved before deployment is the same software that is running on their computers.

To increase confidence, organizations have increasingly turned to source code analysis through direct inspection of the supply chain by a human expert or an automated tool. Source code is a set of computer instructions written in a programming language that humans can read. This code is converted (or compiled) into a binary file of instructions—a language of zeroes and ones that machines can process and execute, or executable. This conversion of human-readable code to machine-readable code, however, raises the unsettling question of whether the machine code—and ultimately the software program running on computers—was built from the same source code files that the expert or tool analyzed. There has been no efficient and reliable method to answer this, even for open source software. Until now.

At Microsoft, we have developed a way to definitively demonstrate that a compiled machine-readable executable was generated from the same human-readable source code that was reviewed. It’s based on the concept of a “birth certificate” for binary files, which consists of unique numbers (or hash values) that are cryptographically strong enough to identify individual source code files.

As source code is compiled in Visual Studio, the compiler assigns the source code a hash value generated in such a way that it is virtually impossible that any other code will produce the same hash value. By matching hash values from the compiler to those generated from the examined source code files, we can verify that the executable code did indeed result from the original source code files.

This method is described in more detail in Hashing Source Code Files with Visual Studio to Assure File Integrity. The paper gives a full description of the new Visual Studio switch for choosing a hashing algorithm, suggested scenarios where such hashes might prove useful, and how to use Visual Studio to generate these source code hashes.
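As a rough illustration of the verification step (a sketch only; the paper documents the actual compiler switch and where the recorded hashes live), checking reviewed source files against the hashes recorded at build time could look like this:

# verify_sketch.py - illustration of the verification idea
# (hypothetical; the paper describes the real Visual Studio
# mechanism). Assumes the "birth certificate" was exported as a
# JSON map of source file names to the compiler-recorded hashes.
import hashlib, json, sys

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

certificate = json.load(open(sys.argv[1]))
for src, recorded in certificate.items():
    status = "OK" if sha256_of(src) == recorded else "MISMATCH"
    print(status, src)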

Microsoft believes that the technology industry must do more to assure its stakeholders of the integrity of software and the digital supply chain. Our work on hashing is both a way to help our customers and a way to advance how the industry addresses this growing problem:

  • This source file hashing can be employed when building C, C++, and C# executable programs in Visual Studio.
  • Technology providers can use unique hash value identifiers in their own software development for tracking, processing, and controlling source code files, definitively demonstrating a strong linkage to the specific executable files.
  • Standards organizations can include in their best practices the requirement to take this very specific and powerful step toward authenticity.

We believe that capabilities such as binary source file hashing are necessary to establish adequate trust to fulfill the potential of digital transformation. Microsoft is committed to building trust in the technology supply chain and will continue to innovate with our customers, partners and other industry stakeholders.

Practical applications of digital birth certificates

There are many practical applications for our binary source file hashing capability, including these:

  • Greater assurance through automated scanning. As an automated analysis tool scans the source code files, it can also generate a hash value for each of the files being scanned. Matching hash values from the compiler with hash values generated by the analysis not only definitively demonstrates that the scanned files were compiled into the executable code, but also that they were scanned with the approved tool.
  • Improved efficiency in identifying vulnerabilities. If a vulnerability is identified in a source file, the hash value of the source file can be used to search among the birth certificates of all the executable programs to identify programs likely to include the same vulnerability, as sketched below.
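Continuing the earlier sketch, this second scenario boils down to an index lookup over the stored birth certificates (a hypothetical data layout):

# Given birth certificates for many executables (a hypothetical
# {executable: {source_file: hash}} layout), find every program
# built from a known-vulnerable source file.
def find_affected(certificates, vulnerable_hash):
    return [exe for exe, sources in certificates.items()
            if vulnerable_hash in sources.values()]

certs = {
    "app.exe": {"parser.c": "ab12...", "net.c": "cd34..."},
    "tool.exe": {"net.c": "cd34...", "ui.c": "ef56..."},
}
print(find_affected(certs, "cd34..."))  # -> ['app.exe', 'tool.exe']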

To learn more about evolving threats to the ICT supply chain, best practices, and Microsoft’s strategy, check out our webinar, Supply Chain Security: A Framework for Managing Risk.



from Microsoft Secure Blog Staff

Tuesday, March 21, 2017

TROOPERS 2017 Day #2 Wrap-Up

This is my wrap-up for the 2nd day of “NGI” at TROOPERS. My first choice for today was “Authenticate like a boss” by Pete Herzog. This talk was less technical than expected but interesting. It focused on a complex problem: identification. It’s not only relevant for users but for anything (a file, an IP address, an application, …). Pete started by providing a definition: authentication is based on identification and authorisation. But identification can be easy to fake. A classic example is the hijacking of a domain name by sending a fax with a fake ID to the registrar – yes, some of them are still using fax machines! Identification is used at any time to ensure the identity of somebody before giving access to something. It’s not only based on credentials or a certificate.

Identification is extremely important. You have to distinguish the good from the bad at any time. Not only people but also files, IOCs, threat intelligence actors, etc. For files, metadata can help with identification. Another example reported by Pete: the attribution of an attack. We cannot be 100% confident about the person or the group behind the attack. The next generation Internet needs more and more identification, especially with all those IoT devices deployed everywhere. We don’t even know what the device is doing. Often, the identification process is not successful. How many times did you say “hello” to somebody on the street or while driving, only to realize it was not the right person? Why? Because we (as well as objects) are changing. We are getting older, wearing glasses, etc. Every interaction you have in a process increases your attack surface by the same amount as one vulnerability. What is more secure: letting a user choose his password or generating a strong one for him? He won’t remember it and will write it down somewhere. In the same way, what’s best: a password or a certificate? An important concept explained by Pete is “intent”. The problem is to have a good idea of the intent (from 0 – none – to 100% – certain).

Example: if an attacker is filling your firewall state table, is it a DoS attack? If somebody performs a traceroute to your IP addresses, is it foot-printing? Can a port scan automatically be categorized as hunting? And will a vulnerability scan be immediately followed by an attempt to exploit? Not always… It’s difficult to predict a specific action. To conclude, Pete mentioned machine learning as a tool that may help with indicators of intent.

After an expected coffee break, I switched to the second track to follow “Introduction to Automotive ECU Research” by Dieter Spaar. ECU stands for “Electronic Control Unit”. It’s a kind of brain present in modern cars that helps control the car’s behaviour and all its options. The idea of the research came after the problem that BMW faced with the unlocking of their cars. Dieter’s motivations were multiple: engine tuning, speedometer manipulation, ECU repair, information privacy (what data is stored by a car?), the “VW scandal” and eCall (emergency calls). Sometimes, features are just a question of ECU configuration: they are present but not activated. Also, from a privacy point of view, what do infotainment systems collect from your paired phone? How much data is kept by your GPS? ECUs depend on the car model and options. In the picture below, yellow blocks are activated ECUs, the others (grey) are optional (this picture is taken from an Audi A3 schema):

Audi A3 ECU

Interaction with the ECU is performed via a bus. There are different bus systems: the best known is CAN (Controller Area Network), then MOST (Media Oriented System Transport), Flexray, LIN (Local Interconnected Network), Ethernet or BroadR-Reach. Interesting fact: some BMW cars have an Ethernet port to speed up upgrades of the infotainment system (like GPS maps), as Ethernet provides more bandwidth to upload big files. ECU hardware is based on typical microcontrollers from Renesas, Freescale or Infineon. Infotainment systems run on ARM, sometimes x86, with QNX, Linux or Android. A special requirement is to provide a fast response time after power on. Dieter showed a lot of pictures of ECUs where you can easily identify the main components (radio, infotainment, telematics, etc). Many of them are manufactured by Peiker. This was a very quick introduction, but it demonstrated that there is still room for plenty of research projects around cars. During the lunch break, I had an interesting chat with two people working at Audi. Security is clearly a hot topic for car manufacturers today!
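For readers who want to get their hands dirty, here is a minimal CAN sniffing sketch using the python-can library (my own example, not from the talk; it assumes a Linux SocketCAN interface named can0, e.g. exposed by a cheap USB-to-CAN adapter):

# can_sniff.py - minimal CAN bus sniffer using python-can
# (assumes a SocketCAN interface "can0"; illustration only).
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")
try:
    while True:
        msg = bus.recv(timeout=1.0)       # wait up to 1s for a frame
        if msg is not None:
            # Print the arbitration ID and raw payload of each frame.
            print("ID=0x%03X data=%s" % (msg.arbitration_id, msg.data.hex()))
except KeyboardInterrupt:
    bus.shutdown()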

For the next talk, I switched again to the other track and attended “PUF ’n’ Stuf” by Jacob Torrey & Anders Fogh. The idea behind this strange title was “getting the most of the digital world through physical identities”. The title came from a US TV show popular in the 60’s. Today, within our ultra-connected digital world, we are moving our identity away from the physical world and it becomes difficult to authenticate somebody. We are losing the “physical” aspect. Humans can quickly spot an imposter just by having a look at a picture and after a simple conversation, even if you don’t personally know the person. But authenticating people via a simple login/password pair is difficult in the digital world. The idea of Jacob & Anders was to bring strong physical identification into the digital world. The concept is called “PUF” or “Physically Uncloneable Function“. To achieve this, they explained how to implement a challenge-response function for devices that should return responses as non-volatile as possible. This can be used to attest the execution state or generate device-specific data. They reviewed examples based on SRAM, EEPROM or CMOS/CCD. The last example is interesting: the technique, called PRNU, can be used to uniquely identify image sensors and is often used in forensic investigations to link a picture to a camera. You can see this PUF as a dual-factor authentication. But there are caveats, like a lack of proper entropy or PUF spoofing. Interesting idea, but not easy to implement in practical cases.
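As a toy illustration of the SRAM PUF idea (my own sketch, not the speakers’ material): power-up SRAM settles into a device-specific bit pattern, so a fresh reading from the same chip should sit within a small Hamming distance of the enrolled fingerprint, while a reading from a different chip will not.

# puf_sketch.py - toy SRAM PUF matching (illustration only).
# Compare a power-up SRAM reading against an enrolled fingerprint
# using Hamming distance, tolerating some measurement noise.
def hamming(a: bytes, b: bytes) -> int:
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def same_device(enrolled: bytes, measured: bytes, max_noise=0.10) -> bool:
    # Accept if at most 10% of the bits flipped (noise threshold).
    return hamming(enrolled, measured) <= max_noise * len(enrolled) * 8

enrolled = bytes.fromhex("a5f0c3961e")    # stored at enrollment time
reading1 = bytes.fromhex("a5f0c3971e")    # same chip, 1 noisy bit
reading2 = bytes.fromhex("0f3c5aa1b2")    # different chip
print(same_device(enrolled, reading1))    # -> True
print(same_device(enrolled, reading2))    # -> False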

After lunch, Stefan Kiese had a two-hour slot to present “The Hardware Striptease Club”. The idea of the presentation was to briefly introduce some components that we find today in our smart houses and see how to break them from a physical point of view. Stefan briefly explained the methodology to approach those devices. When you do this, never forget the potential impact: loss of revenue, theft of credentials, etc., or worse, loss of life (pacemakers, cars). Some of the reviewed victims:

  • TP-Link NC250 (Smart home camera)
  • Netatmo weather station
  • BaseTech door camera
  • eQ-3 home control access point
  • Easy home wifi adapter
  • Netatmo Welcome

He gave an electronics crash course but also insisted on the risks of playing with mains-powered devices! Then, people were able to open and disassemble the devices to play with them.

I didn’t attend the second hour because another talk looked interesting: “Metasploit hardware bridge hacking” by Craig Smith. He works at Rapid7 and plays with all “moving” things, from cars to drones. To interact with those devices, a lot of tools and gadgets are required. The idea was to extend the Metasploit framework to be able to pentest these new targets. With an estimated 20.8 billion IoT devices connected (source: Gartner), pentesting projects around IoT devices will become more and more frequent. Many tools are required to test IoT devices: RF transmitters, USB fuzzers, RFID cloners, JTAG devices, CAN bus tools, etc. The philosophy behind Metasploit remains the same, based on modules (exploits, payloads, shellcodes). New modules are available to access relays, which talk directly to the hardware. Example:

msf> use auxiliary/server/local_hwbridge

A Metasploit relay is a lightweight HTTP server that just makes JSON translations between the bridge and Metasploit.
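As a toy illustration of that relay concept (my own sketch, not Craig’s code), such a server only has to answer HTTP requests with small JSON documents:

# relay_sketch.py - toy illustration of a hardware bridge relay
# (hypothetical; real relays expose a richer API). It answers every
# GET request with a small JSON status document, the way a relay
# reports its attached hardware.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class RelayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"hw_version": "0.1", "device_connected": True})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

HTTPServer(("127.0.0.1", 8080), RelayHandler).serve_forever()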

Example: an ELM327 diagnostic module can be used via serial USB or Bluetooth. Once connected, all the classic framework features are available as usual:

./tools/hardware/elm327_relay.rb
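Once the relay is running, connecting from msfconsole looks roughly like this (from my reading of the framework’s documentation; module paths may differ between Metasploit versions):

msf> use auxiliary/client/hwbridge/connect
msf> set rhost 127.0.0.1
msf> run

This opens a hardware bridge session, against which hardware-specific post modules (e.g. post/hardware/automotive/getvinfo) can then be run.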

Other supported relays include RF transmitters or Zigbee. This was an interesting presentation.

For the last time slot, there were two talks: one about vulnerabilities in TP-Link devices and one presented as “Looking through the web of pages to the Internet of Things“. I chose the second one, presented by Gabriel Weaver. The abstract did not properly describe the topic (or I did not understand it), but the presentation was a review of Gabriel’s research: “CTPL” or “Cyber Physical Topology Language“.

That closes the 2nd day. Tomorrow will be dedicated to the regular tracks. Stay tuned for more coverage.

[The post TROOPERS 2017 Day #2 Wrap-Up has been first published on /dev/random]



from Xavier

3 ways to outsmart attackers by using their own playbook

This blog post was authored by Andrej Budja, Frank Brinkmann, Heath Aubin, Jon Sabberton and Jörg Finkeisen from the Cybersecurity Protection Team, part of the Enterprise Cybersecurity Group.

The security landscape has changed.

Attackers often know more about the target network and all the ways they can compromise an organization than the targeted organization itself. As John Lambert writes in his blog, “Defenders think in lists. Attackers think in graphs. As long as this is true, attackers win”.

Attackers do think in graphs. Unfortunately, most organizations still think in lists and apply defenses based on asset value, rather than the security relationships between the assets.

So, what can you do to level the playing field? Use the attackers’ playbook against them!

Get ahead by creating your own graph

Start by reading John Lambert’s blog post, then do what attackers do – graph your network. At Microsoft, we are using graphs to identify potential attack paths on our assets by visualizing key assets and security relationships.
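To make this concrete, here is a minimal sketch of the idea (my illustration, not Microsoft’s internal tooling; open source projects implement the same concept at scale): model accounts, machines and groups as nodes, security relationships as directed edges, and search for paths to high-value targets.

# attack_graph.py - minimal "think in graphs" sketch (illustration
# only). Nodes are accounts, hosts and groups; a directed edge
# means "can take control of".
import networkx as nx

g = nx.DiGraph()
g.add_edge("user:bob", "host:WS01")        # bob is local admin on WS01
g.add_edge("host:WS01", "user:helpdesk")   # helpdesk creds exposed on WS01
g.add_edge("user:helpdesk", "host:SRV02")  # helpdesk is admin on SRV02
g.add_edge("host:SRV02", "user:da_admin")  # a Domain Admin logs on to SRV02
g.add_edge("user:da_admin", "group:DomainAdmins")

# Any path from a phishable user to Domain Admins is an attack path.
print(" -> ".join(nx.shortest_path(g, "user:bob", "group:DomainAdmins")))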

While we have not published our internal tools (you can find some similar open source tools on the Internet), we have created a special cybersecurity engagement delivered by our global Microsoft Services team, called Active Directory Hardening (ADH).

The ADH offer uses our tools to help discover and analyze privileged account exposure and provide transition assistance for deviations from the privileged administration recommendations used at Microsoft. The ADH provides assistance by reducing the number of highly privileged Active Directory (AD) administrative accounts and transitioning them into a recommended AD administration model.

Break connections in your graph

Once you have the graph for your AD accounts, you will notice clusters as well as the different paths attackers can use to move laterally on your network. You will want to implement security controls to close those paths. One of the most effective ways to reduce the number of paths is by reducing the number of administrators (this includes users that are local administrators on their workstations) and by using dedicated, hardened workstations for all privileged users – we call these Privileged Access Workstations (PAWs).

These PAWs are deployed from a clean source and make use of modern security controls available in Windows 10. Because PAWs are not used as general purpose workstations (no email and Internet browsing allowed), they provide high security assurances for sensitive accounts and block popular attack techniques. PAWs are recommended for administration of identity systems, cloud services, and private cloud fabric as well as sensitive business functions.

You can develop and deploy PAWs on your own by following our online guide, or you can engage Microsoft Services to help accelerate your adoption of PAWs using our standard PAW offering.
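Returning to the earlier graph sketch, the effect of a control like PAWs can be checked directly: remove the edges the control eliminates and verify that the attack path is gone (again my illustration, not Microsoft’s tooling):

# With Domain Admins working only from PAWs, the risky logon edge
# on SRV02 disappears; re-check that no attack path remains.
g.remove_edge("host:SRV02", "user:da_admin")
print(nx.has_path(g, "user:bob", "group:DomainAdmins"))  # -> False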

Bolster your defenses

PAWs provide excellent protection for your privileged users. However, they are less effective when your highest privileged accounts (Domain Administrators and Enterprise Administrators) have already been compromised. In this situation, you need to provide Domain Administrators a new, clean, and trusted environment from which they can regain control of the compromised network.

Enhanced Security Administrative Environment (ESAE) builds upon guidance and security controls from PAWs and adds additional controls by hosting highly-privileged accounts and workstations in a dedicated administrative forest. This new, minimal AD forest provides stronger security controls that are not possible in the production environment with PAWs. These controls are used to protect your most privileged production domain accounts. For more information about the ESAE administrative forest and security concepts, please read ESAE Administrative Forest Design Approach.

Conclusion

“If you know your enemy and know yourself, you need not fear the result of a hundred battles”, Sun Tzu, Chinese general, military strategist, 6th century BCE.

Protecting your valuable assets against sophisticated adversaries is challenging, but it can be made easier by learning from attackers and using their playbook. Our teams are working daily on the latest cybersecurity challenges and sharing our knowledge and experience. Discover more information in the following resources:

About the Cybersecurity Protection Team

Microsoft invests more than a billion dollars each year to build security into our products and services. One of the investments is the global Enterprise Cybersecurity Group (ECG) which consists of cybersecurity experts helping organizations to confidently move to the cloud and modernize their enterprises.

The Cybersecurity Protection Team (CPT) is part of ECG, and is a global team of Cybersecurity Architects that develops, pilots, and maintains cybersecurity offerings that protect your critical assets. The team works closely with other Microsoft teams, product groups, and customers to develop guidance and services that help protect your assets.



from Microsoft Secure Blog Staff