Today I’d like to introduce gcptool, a Google Cloud Platform (GCP) focused tool to assist with auditing a cloud configuration. Here at Carve, we frequently test the security of our customers’ cloud configurations during our projects. Most commonly, this includes AWS- and Azure-based deployments, but we’ve seen increasing usage of Google Cloud over the past several years. Over the course of these projects, we’ve started developing our own tooling to assist in performing these audits. We’ve developed gcptool to cater to our specific needs for cloud configuration reviews: automation of commonly repeated checks, easy extensibility, and repeatability. We’re making the tool available today in the hope that it aligns with your needs and proves useful for improving the security of the projects in your GCP organization.

Over several years of Google Cloud environment assessments, we noticed a large number of simple, repetitive issues pop up across many of the environments. These findings lend themselves well to the “Carve approach:” automate away as much of the easy, repeated work as possible. By doing so, more time and energy can be spent manually reviewing the more interesting and complicated portions of the infrastructure; we use this tooling to help with speed and thoroughness, but manual testing and confirmation still form the core of our work.

Our ultimate goal in developing gcptool is to improve our thoroughness, consistency, and productivity on Google Cloud assessments. Our approach to achieving these goals was to make it easy to write additional audit checks, to capture the state of a cloud environment at a particular point in time, and to integrate tightly with our project flow and other internal tooling. Writing a new check requires only a single Python function plus a template describing the issue and suggested remediation steps. The thin GCP API wrapper in use allows both IDE autocompletion of the appropriate fields and easy cross-reference with Google’s documentation for each service. Taking advantage of this, our goal is to automate the discovery of any new or unique type of issue we find, both to save time on future assessments and to serve as a knowledge base for the types of issues we regularly encounter.
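To give a feel for the shape of such a check, here is a minimal sketch; the `Finding` class, function name, and template filename are hypothetical illustrations, not gcptool’s actual API. The check is a single function that takes collected inventory data and returns the affected resources alongside the name of an issue template:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Finding:
    """Hypothetical finding: an issue template plus the affected resources."""
    template: str
    resources: list = field(default_factory=list)

def check_default_service_account(instances: list) -> Optional[Finding]:
    """Flag Compute Engine instances running as the default service account,
    which carries the broad project-level Editor role unless restricted.

    Each instance dict is assumed to follow the shape of the Compute Engine
    API's instances.list response items.
    """
    affected = [
        inst["name"]
        for inst in instances
        if any(
            sa.get("email", "").endswith("-compute@developer.gserviceaccount.com")
            for sa in inst.get("serviceAccounts", [])
        )
    ]
    if affected:
        return Finding(template="default-service-account.md", resources=affected)
    return None
```

The function stays pure (inventory in, finding out), which keeps checks trivial to unit test against captured API responses rather than a live environment.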

You might ask why we’ve spent the time to develop our own tool when there are already a number of existing tools aimed at helping secure Google Cloud environments. The most comparable tool, and the industry standard, is Scout Suite, which provides similar auditing capabilities and supports all of the major cloud providers. At the time we began writing gcptool for internal use, Scout Suite’s support for Google Cloud was rather sparse; it has since improved, and the two tools are now comparable. Another common Google Cloud security tool is Forseti, which aims to provide administrators with continuous monitoring of their projects; that is useful, but it doesn’t align with our needs during assessments. We chose to write our own tool rather than work to improve Scout Suite or Forseti in order to gain a deeper understanding of Google Cloud’s APIs and to have a tool tailored to our specific needs.

Today, you can install the gcptool framework and a subset of the scans we’ve written from the Python Package Index. Following the quick-start instructions in the README file, successfully scanning your Google Cloud projects should be relatively easy. Running a scan produces a list of the findings applicable to your projects in Markdown format. Though these files are written in a format designed for use by our Pandoc-based report generation framework, they are easily human-readable. Each finding contains a complete listing of affected resources, an explanation of the issue, and suggested remediation steps.

There are many included finding types, but I’ll give a few common examples of findings that gcptool is able to report. First, we often see Compute Engine VMs running with an excessive level of permissions: by default, they run as a service account with the Editor role, which grants read/write permissions to the containing project. Starting from that default, only a single configuration change when creating a new node is required to allow that full set of permissions to be used. Such a misconfiguration can be particularly dangerous when the affected Compute Engine VM is a node in a Kubernetes Engine cluster, as the node’s access can also extend to low-privileged containers running within it. Second, the external surface area from resources such as Cloud Storage buckets and other network services exposed to the public is often larger than intended. Our tool can output lists of Cloud Storage buckets that are world-readable or world-writable, Compute Engine VMs with firewall rules allowing access from the internet, and Cloud Functions that can be triggered anonymously from the internet. Reviewing each of these lists helps keep the exposed surface area to a minimum and catch any accidentally public storage data.
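The world-readable bucket check, for instance, boils down to inspecting each bucket’s IAM policy for bindings granted to the special `allUsers` or `allAuthenticatedUsers` members. The sketch below shows that core logic (a simplified illustration, not gcptool’s exact code); the policy dict follows the shape returned by the Cloud Storage `buckets.getIamPolicy` API:

```python
# The two member identifiers that make a binding public in GCP IAM.
PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

def public_bindings(policy: dict) -> list:
    """Return the roles in a bucket IAM policy that are granted to the public.

    `policy` is assumed to follow the getIamPolicy response shape:
    {"bindings": [{"role": "...", "members": ["..."]}, ...]}
    """
    return [
        binding["role"]
        for binding in policy.get("bindings", [])
        if PUBLIC_MEMBERS & set(binding.get("members", []))
    ]

# Example policy in the shape the Cloud Storage JSON API returns:
policy = {
    "bindings": [
        {"role": "roles/storage.objectViewer", "members": ["allUsers"]},
        {"role": "roles/storage.admin", "members": ["user:admin@example.com"]},
    ]
}
# public_bindings(policy) -> ["roles/storage.objectViewer"]
```

A bucket with any public binding on a viewer role is world-readable; a public binding on a creator or admin role makes it world-writable as well.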