DevOps, are you really protected against unauthorized access?


This post was last edited on November 12, 2018.

Last month’s disclosure of an authentication bypass in the libssh library, and especially all the media coverage it received, showed how even “not the most popular SSH server library” still has thousands of servers exposed directly to the internet. This got me thinking: what makes someone expose an internal server (SSH, HTTPS, RDP or anything else) directly to the internet and count on that server’s application-layer authentication to protect against unauthorized access? Is it a naïve lack of awareness of security risks? Is it a blind belief in the goodness of the universe? Is it the “it won’t happen to me” attitude?

When we introduced the Luminate Secure Access Cloud™ service a couple of years ago, we decided to experiment a little. We intentionally left infrastructure with ports 80, 443 and 22 exposed to the internet in order to measure the number of break-in attempts. We quickly counted thousands of attempts over a very short period of time. All kinds of attacks came through: from simple scripts trying out every possible variation of default user/password combinations, to more sophisticated scanning attempts. It wasn’t hard to see; it started just minutes after we exposed the ports to the internet.
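If you want to reproduce a rough version of that measurement on your own exposed host, here is a minimal sketch. It assumes a Debian/Ubuntu-style Linux machine where sshd writes to /var/log/auth.log; both the path and the message format are assumptions and vary by distribution:

```python
import re
from collections import Counter

# Path and message format are assumptions: Debian/Ubuntu-style hosts log
# failed sshd logins to /var/log/auth.log; RHEL-family systems use /var/log/secure.
AUTH_LOG = "/var/log/auth.log"
FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

attempts = Counter()
with open(AUTH_LOG, errors="ignore") as log:
    for line in log:
        match = FAILED.search(line)
        if match:
            attempts[match.group(1)] += 1  # count failed attempts per source IP

# Print the ten most aggressive sources and their attempt counts.
for source_ip, count in attempts.most_common(10):
    print(f"{source_ip}: {count} failed attempts")
```

On a freshly exposed machine, the counts this prints tend to climb within minutes, which is exactly what we observed.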

This is not the first time this year that I have read security research pointing at some internal server or web portal that had been left wide open to access over the internet. Recently, a comprehensive study was released to the public, showing the following horrifying numbers:

  • Using a combination of scanning technologies, 22,672 admin dashboards were found left open to the internet.
  • ~300 of them were not even password-protected.
  • 95% of these were hosted on Amazon Web Services, and the rest were spread over a long tail of Cloud IaaS/Hosting Providers, including Microsoft Azure, Alibaba Cloud, DigitalOcean and others.

If we were dealing with smaller numbers, I wouldn’t have reacted so harshly to these findings. Anyone could have a ‘sandbox’ environment where they ‘play’ with some technologies, and it wouldn’t be the worst thing in the world to leave it unprotected. In this case, though, we shouldn’t make that assumption: these are not sandbox or playground environments.

Earlier this year, research from the RedLock CSI Team uncovered an attack on an environment belonging to Tesla Inc. The attack started with the infiltration of a Kubernetes console that wasn’t password-protected, which led to the discovery of credentials to AWS S3 buckets containing sensitive information. This is just one case proving that the danger of an attack is real and can materialize at any company, large or small. The same research team had earlier reported finding unprotected, exposed Kubernetes consoles belonging to large enterprises such as Aviva, a multi-national insurance company, and Gemalto, a technology vendor.

DevOps, this one is on you

Analyzing the data in the report, one can clearly see that all the discovered administrative consoles were deployed by R&D and DevOps teams: more than 76% of them were Kubernetes Web Consoles, 19% were Docker Swarm Web UIs, and a considerable number of the rest were Swagger Web UIs. None of these tools is used outside of the R&D and Operations teams.


Why does this happen?

How can people who use such advanced technologies have absolutely no awareness of security and make such basic mistakes?

Well, in my opinion, it is the combination of the ease of creating new infrastructure in AWS (and other IaaS CSPs) and the complexity of deploying security solutions. In today’s reality, quite a lot of corporate connectivity happens over the internet, with companies all over the world adopting more and more SaaS solutions. This creates a dissonance between SaaS and IaaS-based application access: when accessing self-hosted apps, we suddenly need to use complex remote access/VPN tools, or work with just-in-time access (hoping that it doesn’t open something too wide). The temptation of “just accessing it” directly is way too big to resist.

While many companies, particularly players in the Cloud Workload Protection Platforms (CWPP) field, are evangelizing tools that can monitor environments and suggest hardening them, I would like to propose a different approach.

Why don’t we start with a simple “default deny” state of mind?

We could forget to deploy a monitoring/alerting solution. We could miss hardening a CSP environment configuration. But spinning up every environment without allowing any inbound access ‘to begin with’ is very simple: it is relatively easy to remember and even easier to enforce with any Infrastructure-as-Code or configuration automation template, as in the sketch below.
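As a minimal sketch of that “no inbound access to begin with” posture, assuming AWS and the boto3 SDK (the VPC ID and group name below are hypothetical placeholders), a freshly created security group has no inbound rules at all, and even the default allow-all egress rule can be revoked:

```python
import boto3

# Hypothetical identifiers for illustration; substitute your own VPC and name.
VPC_ID = "vpc-0123456789abcdef0"

ec2 = boto3.client("ec2")

# A newly created security group carries no inbound rules whatsoever:
# the environment is default-deny from the moment it exists.
resp = ec2.create_security_group(
    GroupName="default-deny-sg",
    Description="No inbound access to begin with",
    VpcId=VPC_ID,
)
sg_id = resp["GroupId"]

# Optionally lock down egress too (AWS allows all outbound traffic by default).
ec2.revoke_security_group_egress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "-1",
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```

The same few lines drop naturally into a Terraform module, a CloudFormation template or any other automation you already run, which is what makes this habit so easy to enforce.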

Creating infrastructure without the ability to connect to it is scary. What would we do if something went wrong? How would we connect to it to understand what was going on, or manage the infrastructure we just created? Sometimes, the right solution can also be the easiest solution to deploy.

Say ‘Yes’ to Zero Trust

Zero Trust Access builds on principles originally developed by the Department of Defense for its “need to know” model. In a nutshell, it rests on the following fundamental assumptions:

  • No workload or service is ever exposed publicly: no open ports on publicly routable IPs. This is sometimes defined as “cloaking” – your infrastructure is not visible to any external party (see the audit sketch after this list).
  • In order to get any kind of access, the accessing party needs to undergo authentication, and, where applicable, verification of the security posture of the endpoint device – this principle is sometimes referred to as “first authenticate, then connect”.
  • Lastly, Zero Trust Access never provides access to the network; it only allows use of the specific services that the accessing party is authorized to use, and absolutely nothing more.
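The first principle is also the easiest one to audit. Here is a minimal sketch, again assuming AWS and boto3 (the same check applies to any CSP’s firewall API), that flags every security group rule left open to the whole internet – under proper “cloaking”, it should print nothing:

```python
import boto3

ec2 = boto3.client("ec2")

# Walk every security group and flag inbound rules open to 0.0.0.0/0.
paginator = ec2.get_paginator("describe_security_groups")
for page in paginator.paginate():
    for sg in page["SecurityGroups"]:
        for rule in sg["IpPermissions"]:
            open_to_world = any(
                r["CidrIp"] == "0.0.0.0/0" for r in rule.get("IpRanges", [])
            )
            if open_to_world:
                # FromPort/ToPort are absent for "all traffic" (-1) rules.
                ports = f"{rule.get('FromPort', 'all')}-{rule.get('ToPort', 'all')}"
                print(f"{sg['GroupId']} ({sg['GroupName']}): "
                      f"ports {ports} open to the internet")
```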

Building a Zero Trust Access architecture used to be a complex task, but it no longer is.

At Luminate, we address this challenge with the Secure Access Cloud™. It is super-easy to deploy, and it provides cloud-native “default-deny” connectivity to all IaaS-based resources without deploying any special virtual appliances or endpoint agents. And best of all, you can fully integrate it with your cloud automation and gain the transparent ability to connect to your infrastructure the moment it is created, while completely avoiding the risks of exposing sensitive environments to unauthorized access.

Download our guide:  5 tips to make your security roadmap flexible



Written by Leonid Belkind, CTO & co-founder

Leonid Belkind leads the technology vision for Luminate’s products. Prior to co-founding Luminate, Leonid worked for more than 14 years at Check Point Software Technologies, the leading vendor of IT network security products, where he built generations of IT security products, from early concept to delivery to Fortune 100 customers. He also established large international product delivery organizations of 150+ employees. Leonid is a graduate of Israel’s elite military technology unit, 8200.