Digital Security Strategy, Part 2: Rising Into The Clouds
Roman Faynberg - Fri, 25 May 2018

Our customers who embark on a "digital transformation" journey frequently face a dilemma as they consider the cloud as the target platform for their new product: how should they move computing to the cloud? Do they follow the "lift-and-shift" strategy, effectively replicating a traditional data center, just in the cloud? Or do they embrace the capabilities the cloud offers and design the deployment accordingly? In this post of the Digital Transformation series, we discuss the common security problems that companies face as they move computing into the cloud.

The "lift-and-shift" approach is usually looked at as the easiest of the two options, but in reality, standard on-prem deployments have some inherent security issues: a hard outer perimeter shell with a fairly soft, trusting inside. So, moving this style of deployment into the cloud will cause the same issues, except with a potentially higher exposure.

The alternative is, of course, adopting the new, and different, security controls that the cloud provides. But that does not come without a cost. While the cloud offers the long-term benefit of consistency via "API-based infrastructure" that can be spun up or down at the push of a button, getting there requires a major re-engineering of the infrastructure and a large upfront effort.

Business drivers, such as product release deadlines, usually require that this transformation be accomplished quickly despite the large effort. A common security consequence of such a transformation is what's known as "shadow IT": assets deployed without proper approvals, governance, and oversight. These are a security liability by definition, but set up in the cloud, they multiply the organization's risk. Visibility into cloud deployments is a major challenge for most organizations.

Let's take the example of AWS. As of mid-2017, AWS was offering 98 (ninety-eight) distinct services, and it is only logical that when a new solution built on such a multitude of technologies is deployed, configuration mistakes will be made. The sheer scale of the deployment can make those mistakes very difficult to catch. With that many services, chances are someone on the team is deploying a given service for the first time, which makes mistakes all the more likely.

And even if you do catch a security hole in time, what happens tomorrow? Or a month from now? How can one make sure that the latest configuration change didn't open a new security hole?

These are hard questions with no easy answers, but they bring us to the topic of mapping out the attack surface in the cloud. In the days of yore, one would at least have known, contiguous IP blocks that could be port-scanned for various services. Enumerating the IP-based presence of a cloud deployment is a different beast, and may simply not be feasible that way. With resources not tied to a specific IP addressing scheme, you can easily have, say, an S3 bucket that you have forgotten about but that your application still points to.
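
To make the difference concrete: with API-based infrastructure, the inventory itself is queryable. Below is a minimal sketch (not a complete tool) using the AWS SDK for Python, boto3, that lists every S3 bucket in the account and every EC2 instance carrying a public IP. It assumes credentials are already configured; the region name is just an example, and pagination is omitted for brevity.

    import boto3

    # S3 bucket names are account-wide; no region is needed to list them
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        print("S3 bucket:", bucket["Name"])

    # EC2 instances that carry a public IP address (region is an example)
    ec2 = boto3.client("ec2", region_name="us-east-1")
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            if "PublicIpAddress" in instance:
                print("Public EC2 instance:", instance["InstanceId"], instance["PublicIpAddress"])

The point is less the specific calls than the model: the authoritative inventory lives behind the provider's API, not in an IP range you can port-scan.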

In our "digital transformation" engagements, Carve has been working with customers on finding ways to not only enumerate this attack surface and take a snapshot, but track the changes as time goes by. This is a work in progress, but I think it is useful to identify some quick and easy ways to start gaining an understanding of what currently presents a major risk in your cloud. For the purposes of this post, I'll list the most common "gotchas" with AWS, which currently has a plurality of the market with a ~47% share:

  • Unprotected S3 buckets. Unauthorized reading from or writing to a bucket is effectively the same as being able to read from or write to a hard drive on which your application lives (see the sketch after this list).

  • EC2 instances with unaccounted-for public IP addresses.

  • Open security groups left over from development or testing stages. Combined with the previous item, this may expose an internal service to the public Internet.

  • Application endpoints exposed to the outside world without sufficient protection (case in point: an Elastic Beanstalk endpoint that is logically fronted by AWS API Gateway but nonetheless directly addressable from the Internet).

  • Lack of multi-factor authentication on sensitive accounts.

There are of course many more, but the above are some of the more common issues.
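
To make a few of the items above concrete, here is a rough sketch, again with boto3, that flags buckets whose ACL grants access to everyone, security groups open to 0.0.0.0/0, and IAM users without an MFA device. Treat it as a starting point under simplifying assumptions: ACLs are only one way a bucket can become public (bucket policies need a separate check), the region is an example, and pagination and error handling are omitted.

    import boto3

    # S3 buckets whose ACL grants access to the AllUsers or AuthenticatedUsers groups
    PUBLIC_GROUPS = {
        "http://acs.amazonaws.com/groups/global/AllUsers",
        "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
    }
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        for grant in acl["Grants"]:
            if grant.get("Grantee", {}).get("URI") in PUBLIC_GROUPS:
                print("Public grant on bucket %s: %s" % (bucket["Name"], grant["Permission"]))

    # Security groups with ingress rules open to the entire Internet
    ec2 = boto3.client("ec2", region_name="us-east-1")
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for perm in sg["IpPermissions"]:
            if any(r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])):
                print("Security group %s (%s) allows ingress from 0.0.0.0/0" % (sg["GroupId"], sg["GroupName"]))

    # IAM users with no MFA device attached (does not cover the root account)
    iam = boto3.client("iam")
    for user in iam.list_users()["Users"]:
        if not iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]:
            print("IAM user without MFA:", user["UserName"])

None of this replaces a proper review, but a loop like this takes minutes to run and tends to surface the loudest problems first.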

To get a high-level idea of how your deployment fares, and to identify major problems, you can use readily available tools that provide a 30,000-foot "security view", such as the very useful Scout2 by NCC Group, or AWS's own Trusted Advisor. You can also take the do-it-yourself approach: the AWS CLI makes it easy to query the deployment. We at Carve have also been putting together our own framework that extracts only the most relevant information from an AWS deployment using the CLI and the AWS SDKs, including checks for common misconfigurations in some of the newer services.
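
As a taste of the do-it-yourself approach (a sketch, not the framework mentioned above), even a single SDK call can be informative: the IAM account summary reports, among other counters, whether the root account has MFA enabled.

    import boto3

    # The IAM account summary is a cheap, account-wide sanity check
    iam = boto3.client("iam")
    summary = iam.get_account_summary()["SummaryMap"]
    print("Root account MFA enabled:", bool(summary.get("AccountMFAEnabled")))
    print("IAM users:", summary.get("Users"))
    print("Customer-managed policies:", summary.get("Policies"))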

It's important to keep in mind that any "snapshot" of the current state is just that: a point-in-time assessment that may become obsolete the next day. Cloud environments are fluid, so the focus should be on identifying meaningful differences as soon as possible, both to avoid lingering security issues and to ensure continued visibility into the environment.
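
One lightweight way to keep that snapshot from going stale is to save each run as JSON and diff successive runs. The sketch below assumes a purely illustrative snapshot format: a dictionary mapping a resource category (say, "buckets" or "public_ips") to a list of identifiers, which the fragments above could easily be shaped into.

    import json
    import sys

    def load(path):
        """Load a saved snapshot; the JSON layout is illustrative, not a standard."""
        with open(path) as f:
            return json.load(f)

    def diff_snapshots(old, new):
        """Print resources that appeared (+) or disappeared (-) between two snapshots."""
        for category in sorted(set(old) | set(new)):
            before = set(old.get(category, []))
            after = set(new.get(category, []))
            for item in sorted(after - before):
                print("+ %s: %s" % (category, item))
            for item in sorted(before - after):
                print("- %s: %s" % (category, item))

    if __name__ == "__main__":
        # Usage: python diff_snapshots.py yesterday.json today.json
        diff_snapshots(load(sys.argv[1]), load(sys.argv[2]))

An unexpected "+" next to a public IP or an open security group is exactly the kind of meaningful difference worth alerting on.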