Where is your organization really in your zero trust journey, and how much further do you have to go? Implementing a true zero trust architecture is more aspirational than achievable.
I interviewed 57 security leaders and asked them "What sucks in security?" Their top pain points were inconsistent access management, vulnerability prioritization and remediation, and obtaining SaaS logs in case of an incident.
Rather than a security tool alerting the security team (in Slack), who then needs to find the right person to ping (also in Slack), what if the tool just short-circuited that and went right to the source (in Slack, of course)?
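A minimal sketch of what that might look like, assuming a Slack bot token with the chat:write, im:write, and users:read.email scopes; find_owner_email is a hypothetical stand-in for whatever owner lookup (CODEOWNERS, an on-call schedule) the tool already has:

```python
from slack_sdk import WebClient

client = WebClient(token="xoxb-...")  # bot token (placeholder)

def find_owner_email(resource: str) -> str:
    """Hypothetical lookup: map a resource to its owner via CODEOWNERS, an on-call schedule, etc."""
    return "owner@example.com"

def notify_owner(alert: dict) -> None:
    # Message the person closest to the problem directly,
    # instead of posting to a shared #security channel.
    owner = client.users_lookupByEmail(email=find_owner_email(alert["resource"]))["user"]
    client.chat_postMessage(
        channel=owner["id"],  # passing a user ID opens a DM with that user
        text=f"Heads up: {alert['title']} affects {alert['resource']}. Can you take a look?",
    )
```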
Good news everyone, Tailscale is SOC 2 compliant! Our Type II audit validates that our security controls were effective over the period of time evaluated and that we're actually implementing the policies and procedures we committed to. Continue reading to learn about some of the challenges we faced with our audit, open source tools we'd like to share, and how we think our SOC 2 compliance efforts can be improved.
No organization has successfully implemented a full zero trust architecture. Many proponents of zero trust, including the US government, have ignored devices as a key component of BeyondCorp.
Developers often want to do the 'right' thing when it comes to security, but they don't always know what that is. To help developers keep moving quickly while achieving better security outcomes, organizations are turning to DevSecOps. But what should you do about the dependencies your code pulls in as part of your software supply chain?
Last month, GitHub Supply Chain Security Product Manager Maya Kaczorowski explained what DevSecOps is and security best practices for development teams. We had some follow-up questions, so we asked her back for a lightning Q&A: a quick deep dive on why DevSecOps matters for developers, and how to apply it to the developer workflow.
Today, open source is everywhere—in almost all proprietary codebases and community projects. For organizations, the question isn't if you are or aren't using open source code. It's what open source code you're using, and how much. If you aren't aware of what's in your software supply chain, an upstream vulnerability in one of your dependencies can affect your application, making you susceptible to a potential compromise. In this post, we'll dig into what the term “software supply chain security” means, why it matters, and how you can help secure your project's supply chain.
DevSecOps, shifting left, and GitOps: you've probably heard all of these terms recently, but you might not be sure about what they mean. The reality is that these practices share a lot of the same principles—to reduce the time developers need to spend on security, while achieving better outcomes. And who doesn't want that? Let's clear up some confusion and deconstruct what these terms mean, and how they apply to your security and development teams.
With the accelerated use of open source, your project likely depends on hundreds of dependencies—203 package dependencies per repository on average, to be exact. How can you actually tell what dependencies your application has? Let's dive in to better understand what dependencies are, how to use the GitHub dependency graph to see their impact on your code, and what you should be doing to maintain them.
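If you want that answer in machine-readable form, one approach is to export the dependency graph as an SBOM through GitHub's REST API. A rough sketch, assuming a token with read access to the repository (the owner, repo, and token values below are placeholders):

```python
import requests

OWNER, REPO, TOKEN = "my-org", "my-repo", "ghp_..."  # placeholders

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/dependency-graph/sbom",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
)
resp.raise_for_status()

# The export is an SPDX document; each entry in "packages" is one dependency.
packages = resp.json()["sbom"]["packages"]
print(f"{len(packages)} packages in the dependency graph")
for pkg in packages[:10]:
    print(pkg.get("name"), pkg.get("versionInfo", "unknown version"))
```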
How many of us have ignored a software update, indefinitely clicking “Remind me later” and never quite getting around to it? Unfortunately, it's pretty common. But delaying a dependency update doesn't make the update go away. And if an update doesn't include a security patch, is it still okay to delay it? Not really: even when updates don't address known vulnerabilities, regularly updating your dependencies improves your security.
If you're serious about the security of your Kubernetes operating environment, you need to build on a strong foundation. The Center for Internet Security's (CIS) Kubernetes Benchmark gives you just that: a set of Kubernetes security best practices that will help you build an operating environment that meets the approval of both regulators and customers. In conjunction with CIS, we've released a new CIS Google Kubernetes Engine (GKE) Benchmark, available under the CIS Kubernetes Benchmark, which takes the guesswork out of figuring out which CIS Benchmark recommendations you need to implement, and which ones Google Cloud handles as part of the GKE shared responsibility model.
At Google, we care deeply about the security of open-source projects, as they're such a critical part of our infrastructure—and indeed everyone's. Today, the Cloud-Native Computing Foundation (CNCF) announced a new bug bounty program for Kubernetes that we helped create and get up and running. Here's a brief overview of the program, other ways we help secure open-source projects and information on how you can get involved.
Today, the Kubernetes Product Security Committee is launching a new bug bounty program, funded by the CNCF, to reward researchers finding security vulnerabilities in Kubernetes.
As GKE moved from v1.12 to v1.15 over the past year, here's an overview of the security changes we've made to the platform, both behind the scenes and through stronger defaults, as well as the advice we've added to the GKE hardening guide.
Google's cloud-native architecture was developed with security prioritized as part of every evolution. Today, we're introducing a whitepaper about BeyondProd, which explains the model for how we implement cloud-native security at Google. As many organizations seek to adopt cloud-native architectures, we hope security teams can learn how Google has been securing its own architecture, and simplify their adoption of a similar security model.
As you get ready to install and configure your Kubernetes environment (on so-called day one), here are some security questions to ask yourself, to help guide your thinking.
Today, we're releasing two features to help you protect and control your GKE environment and support regulatory requirements: the general availability of GKE application-layer Secrets encryption, so you can protect your Kubernetes Secrets with envelope encryption; and customer-managed encryption keys (CMEK) for GKE persistent disks in beta, giving you more control over encryption of persistent disks.
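To make the envelope encryption pattern concrete, here's a conceptual sketch: encrypt the data with a locally generated data encryption key (DEK), then wrap that DEK with a key encryption key (KEK) held in Cloud KMS. This illustrates the pattern, not GKE's internal implementation; it assumes the google-cloud-kms and cryptography packages, and the key resource name is a placeholder.

```python
import os
from google.cloud import kms
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Placeholder resource name for a KEK you manage in Cloud KMS.
KEK_NAME = "projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-kek"

def encrypt_secret(plaintext: bytes) -> dict:
    dek = AESGCM.generate_key(bit_length=256)          # local data encryption key
    nonce = os.urandom(12)
    ciphertext = AESGCM(dek).encrypt(nonce, plaintext, None)

    # Wrap the DEK with the KEK held in Cloud KMS; only the wrapped DEK gets stored.
    wrapped_dek = kms.KeyManagementServiceClient().encrypt(
        request={"name": KEK_NAME, "plaintext": dek}
    ).ciphertext

    return {"nonce": nonce, "ciphertext": ciphertext, "wrapped_dek": wrapped_dek}
```

The appeal of the pattern is that the KEK never leaves KMS and the bulk data never goes to KMS; rotating the KEK means re-wrapping small DEKs, not re-encrypting all of your data.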
Kubernetes reached an important milestone recently: the publication of its first-ever security audit! Sponsored by the Cloud Native Computing Foundation (CNCF), this security audit reinforces what has been apparent to us for some time now: Kubernetes is a mature open-source project for organizations to use as their infrastructure foundation.
We've been hard at work to make it easier for you to ensure security as you develop, build, deploy, and run containers, with new products and features in Google Kubernetes Engine and across Google Cloud. Here's what we recently announced at Next '19, and how you can use these for your container deployments—so there's less cryptojacking, and more time for whale watching, as it were.
Security in the cloud is a shared responsibility between the cloud provider and the customer. Google Cloud is committed to doing its part to protect the underlying infrastructure, for example with encryption at rest by default, and to providing capabilities you can use to protect your workloads, like access controls in Cloud Identity and Access Management (IAM). As newer infrastructure models emerge, though, it's not always easy to figure out what you're responsible for versus what's the responsibility of the provider. In this blog post, we aim to clarify for Google Kubernetes Engine (GKE) what we do and don't do, and where to look for resources to lock down the rest.
Dev, ops, and security teams all want their workloads to be more secure (and make those pesky containers actually “contain”!); the challenge is making those teams more connected to bring container security to everyone. The theme of the 2019 Container Security Summit was just that: “More contained. More secure. More connected.” Here are four topics that led the day at the summit.
At Google Cloud, we care deeply about protecting your data. That's why we encrypt data at rest by default, including data in Google Kubernetes Engine (GKE). For Kubernetes secrets (small bits of data your application needs at build or runtime), your threat model might be different, and storage-layer encryption alone may not be sufficient. Today, we're excited to announce in beta GKE application-layer secrets encryption, using the same keys you manage in our hosted Cloud Key Management Service (KMS).
At Google Cloud, we've long maintained base images as part of the infrastructure that powers hosted services such as Google App Engine. With managed base images, we'll provide base images for common operating systems, and patch them automatically.
Adopting containers and container orchestration tools like Kubernetes can be intimidating to anyone, but if you're on the security team, it can feel like yet another technology that you're now responsible for securing. We talk a lot about how to secure containers and avoid common container security pitfalls, but did you know that you can use containers to improve your overall security posture?
Earlier this year at KubeCon in Copenhagen, the message from the community was resoundingly clear: “this year, it's about security”. If Kubernetes was to move into the enterprise, there were real security challenges that needed to be addressed. Six months later, at this week's KubeCon in Seattle, we're happy to report that the community has largely answered that call. In general, Kubernetes has made huge security strides this year, and giant strides on Google Cloud. Let's take a look at what changed this year for Kubernetes security.
While containers bring great benefits to your development pipeline and provide some resource separation, they were not designed to provide a strong security boundary. With that said, let's take a look at what kind of security isolation containers do provide, and, in the event that it's not enough, where to look for stronger isolation.
Today, we're excited to announce that you'll soon be able to manage security alerts for your clusters in Cloud Security Command Center (Cloud SCC), a central place on Google Cloud Platform (GCP) to unify, analyze and view security data across your organization. Further, even though we just announced Cloud SCC a few weeks ago, five container security companies have already integrated their tools with Cloud SCC to help you better secure the containers you're running on Google Kubernetes Engine.
This is the first in a series of blog posts that will cover container security on Google Cloud Platform (GCP), and how we help you secure your containers running in Google Kubernetes Engine.
We already have plenty of talent in Canada. However, we are not turning out enough computer science graduates to keep up with demand, especially not enough women. We sat down for an interview with one of our own trailblazers, Montrealer Maya Kaczorowski, a Product Manager at Google in Security & Privacy.
This week, McKinsey released a report titled “Making a secure transition to the public cloud,” the result of interviews with IT security experts at nearly 100 enterprises around the world. Leveraging the expertise of Google Cloud and McKinsey security experts, the research presents a strategic framework for IT security in cloud and hybrid environments, and provides recommendations on how to migrate to the cloud while keeping security top of mind.
Protecting your data is of the utmost importance for Google Cloud, and one of the ways we protect customer data is through encryption. We encrypt your data at rest, by default, as well as while it's in transit over the internet from the user to Google Cloud, and then internally when it's moving within Google, for example between data centers. We aim to create trust through transparency, and today, we're releasing a white paper, “Encryption in Transit in Google Cloud,” that describes our approach to protecting data in transit.
Cloud Key Management Service (KMS) is now generally available. Cloud KMS makes it even easier for you to encrypt data at scale, manage secrets, and protect your data the way you want, both in the cloud and on-premises. Today, we're also announcing a number of partner options for using Customer-Supplied Encryption Keys.
Google has long supported efforts to encrypt customer data on the internet, including using HTTPS everywhere. In the enterprise space, we're pleased to broaden the continuum of encryption options available on Google Cloud Platform (GCP) with Cloud Key Management Service (KMS), now in beta in select countries.
Control over data or agility of the cloud? Why not both? We are pleased to announce that Customer-Supplied Encryption Keys (CSEK) for Compute Engine is now generally available, allowing you to take advantage of the cloud while protecting your Google Compute Engine disks with keys that you control.
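As a rough sketch of what "keys that you control" looks like in practice, here's how a CSEK-protected persistent disk might be created with the google-cloud-compute client library. The project, zone, and disk names are placeholders, and in a real system the key would come from your own key management, not be generated at call time.

```python
import base64
import os
from google.cloud import compute_v1

# A 256-bit key you generate and manage yourself, passed base64-encoded.
# Google does not store it: lose the key and you lose access to the disk.
raw_key = base64.b64encode(os.urandom(32)).decode()

disk = compute_v1.Disk(
    name="csek-protected-disk",
    size_gb=100,
    disk_encryption_key=compute_v1.CustomerEncryptionKey(raw_key=raw_key),
)

compute_v1.DisksClient().insert(
    project="my-project",    # placeholder
    zone="us-central1-a",    # placeholder
    disk_resource=disk,
)
```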