ALPS blog

The Fault in Our Kubelets: Analyzing the Security of Publicly Exposed Kubernetes Clusters

While researching cloud-native tools and how they can reveal information about a system or an organization, we came across data sets from Shodan concerning Kubernetes clusters (aka K8s). Specifically, we found 243,469 Kubernetes clusters publicly exposed and indexed on Shodan. These clusters also exposed port 10250, which the kubelet uses by default.

fig1-analyzing-the-security-of-publicly-exposed-kubernetes-clusters
Figure 1. Top five countries with the highest number of exposed kubelet ports from the Shodan query <product:"Kubernetes" port:10250>; search performed on April 19, 2022

This data is relatively new; the historical trend provided shows that it was added in July 2021, less than a year ago as of this writing. While using Shodan, we also identified the top organizations hosting Kubernetes clusters and exposing the same kubelet port to the internet.

fig2-analyzing-the-security-of-publicly-exposed-kubernetes-clusters
Figure 2. Historical trend for the query <product:"Kubernetes" port:10250> using Shodan; search performed on April 19, 2022
fig3-analyzing-the-security-of-publicly-exposed-kubernetes-clusters
Figure 3. Top 14 organizations hosting Kubernetes clusters with the exposed kubelet port; search performed on April 19, 2022 via Shodan

The kubelet
The kubelet is the agent that runs on each node and ensures that all containers are running in a pod. It is also the agent responsible for any configuration changes on the nodes and has three main functions:

  • Helps nodes join the Kubernetes cluster
  • Starts and manages the health of containers running on its node
  • Keeps the control plane up to date on the node status and other information
fig4-analyzing-the-security-of-publicly-exposed-kubernetes-clusters
Figure 4. The kubelet agent and how it works inside Kubernetes

With this in mind, we are concerned with cybercriminal developments in which attackers abuse the kubelet API as an entry point when targeting Kubernetes clusters to mine for cryptocurrency, as we reported last year. The method of abusing container administration services to execute commands inside containers is listed in the MITRE ATT&CK matrix for containers as technique T1609 (Container Administration Command), to which we contributed by sharing our research and data.

The kubelet API

Port 10250 is used by the kubelet API by default. It is open on all nodes of a cluster, including the control plane and worker nodes. Usually, this port is only exposed internally and is not accessible via external services. By default, requests to the kubelet's API endpoint that are not blocked by other authentication methods are treated as anonymous requests. The kubelet API is undocumented; one of its endpoints, /runningpods, returns all pods running on the kubelet's node, while the /run endpoint allows the user to run commands directly inside the pods. For more information on the kubelet API endpoints, we recommend looking at the open-source tool kubeletctl, which helps query the kubelet API just as kubectl does for the Kubernetes API server.
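To make the endpoint layout concrete, here is a minimal sketch of how the read and exec endpoints described above are addressed. The node IP, namespace, pod, and container names are hypothetical examples; the URL paths follow the /runningpods and /run conventions mentioned in the text.

```python
# Minimal sketch of kubelet API endpoint addressing.
# All names below (IP, namespace, pod, container) are illustrative only.

KUBELET_PORT = 10250  # default kubelet API port

def running_pods_url(node_ip: str) -> str:
    """URL that lists all pods running on the node's kubelet."""
    return f"https://{node_ip}:{KUBELET_PORT}/runningpods/"

def run_command_url(node_ip: str, namespace: str, pod: str, container: str) -> str:
    """URL used to execute a command inside a container via the kubelet."""
    return f"https://{node_ip}:{KUBELET_PORT}/run/{namespace}/{pod}/{container}"

print(running_pods_url("203.0.113.10"))
print(run_command_url("203.0.113.10", "default", "web-pod", "nginx"))
```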

Analyzing data from Shodan

After seeing this number of Kubernetes clusters with their kubelets exposed to the internet, we had two questions in mind: How many of those clusters are leaking cluster information via the kubelet, and how many of them might be vulnerable to attacks via the kubelet? We downloaded and triaged the data from Shodan to identify the clusters that would respond to anonymous requests to the kubelet API. With the IP address information provided and a simple script to make requests to the kubelet API, we were able to gather some interesting information from the exposed Kubernetes nodes and kubelets. Results from our analysis of over 240,000 exposed Kubernetes nodes showed that most of the clusters tested either blocked anonymous requests by returning HTTP status 401 (Unauthorized) or were unreachable at the time of our requests.
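The triage described above can be sketched as a simple classification of each probe result into the outcome categories discussed in this section. The `classify` helper and the sample status list are illustrative assumptions, not the actual script or data set used.

```python
from collections import Counter

def classify(status):
    """Map a probe result to the outcome categories used in this analysis."""
    if status is None:
        return "unreachable"       # timeout or connection refused
    if status == 401:
        return "401 Unauthorized"  # anonymous authentication disabled
    if status == 403:
        return "403 Forbidden"     # accepted as anonymous, but not authorized
    if status == 200:
        return "200 OK"            # kubelet leaked running-pod information
    return "other"

# Illustrative probe results only; not the real Shodan-derived data.
results = [401, 401, None, 403, 200, 401, None]
print(Counter(classify(s) for s in results))
```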

fig5-analyzing-the-security-of-publicly-exposed-kubernetes-clusters
Figure 5. Number of exposed kubelets and the respective responses

This is what you would probably see when accessing the API endpoint via a browser, and what might be considered normal behavior when accessing an API without a valid authentication token:

fig6-analyzing-the-security-of-publicly-exposed-kubernetes-clusters
Figure 6. A kubelet API endpoint returning a 401 “Unauthorized” notification to anonymous requests

At first, this might appear to be a good sign. However, if an attacker can compromise the kubelet authentication token, these clusters could be in danger. In addition, this response already confirms that a Kubernetes cluster is running in that environment, which could lead an attacker to try other K8s exploits and vulnerabilities to infiltrate it.

Almost 76,000 requests didn’t return any response either by timing out after 10 seconds or by refusing to connect on that port. We think this is reasonable given that these environments are ephemeral, and nodes can be created or destroyed based on demand.

We also noticed that almost 3,500 servers returned a "403 – Forbidden" notification instead of the more common 401 response. This means that the kubelet API accepted the unauthenticated request, authenticating it as the anonymous user, but determined that we did not have the proper permissions (authorization) to access that specific endpoint. As we can see from the image below, it clearly states why it is blocking the anonymous request (user=system:anonymous) from getting information about the running pods (verb=get, resource=nodes).

fig7-analyzing-the-security-of-publicly-exposed-kubernetes-clusters
Figure 7. kubelet API endpoint returning a 403 notice to anonymous requests

The last response type, though the least common, was "200 – OK," meaning that some nodes running a kubelet responded with information on the pods running on that node. This is a JSON response that includes each pod's name, the namespace it runs in inside the cluster, and the containers running inside each pod; one pod can run one or more containers. Here's an example of an exposed kubelet returning information about its running pods and containers:
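A response of this shape can be parsed in a few lines to extract the pod name, namespace, and container names described above. The sample payload below is a heavily abbreviated, illustrative stand-in; a real /runningpods response is a Kubernetes PodList with many more fields.

```python
import json

# Abbreviated, illustrative /runningpods-style response (not real data).
sample = """
{
  "kind": "PodList",
  "items": [
    {
      "metadata": {"name": "web-pod", "namespace": "default"},
      "spec": {"containers": [{"name": "nginx", "image": "nginx:1.21"}]}
    }
  ]
}
"""

pods = json.loads(sample)
for pod in pods["items"]:
    meta = pod["metadata"]
    containers = [c["name"] for c in pod["spec"]["containers"]]
    print(f'{meta["namespace"]}/{meta["name"]}: {containers}')
```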

fig8-analyzing-the-security-of-publicly-exposed-kubernetes-clusters
Figure 8. An exposed kubelet returning data on the running pods on the node

We didn't try to execute commands on any of the pods that returned information from the kubelet API endpoint. But judging from that response, there is a high possibility that requests to the /run endpoint would also succeed, meaning an attacker would be able to install and run programs directly on those pods just by using the kubelet API. This mirrors our documentation of what TeamTNT did to multiple clusters last year. Considering that this is now possible via external requests, however, things become even easier for attackers.

fig9-analyzing-the-security-of-publicly-exposed-kubernetes-clusters
Figure 9. An example of multiple run commands performed by cybercriminals such as TeamTNT to compromise K8s clusters via the kubelet API

Protecting the kubelet

While exposure of the kubelet has unfortunately become common, as we have previously written about, this is one of the first instances where we observed this many exposed nodes in one scan. In the wrong hands, the exposed kubelets that list all their pods and respond with endpoint information (HTTP 200) could be used to deploy malicious pods, such as cryptominers, via the kubelet API. Attackers could also deploy pods to steal secrets and credentials, and possibly even delete the entire node. For ethical reasons, we chose not to verify this.

In particular, it is important to note here that since the organizations affected use the managed versions of Kubernetes, cloud service providers (CSPs) can improve their services to their customers by identifying and alerting their clients of their exposed and accessible kubelets. To prevent this issue from taking place in your cluster, it is important to keep in mind two critical factors for kubelet security settings: authentication and authorization.

  • Enabling kubelet authentication. According to the Kubernetes documentation, requests to the kubelet's API endpoint that are not blocked by other authentication methods are treated as anonymous requests by default. It's important to start the kubelet with the --anonymous-auth=false flag to disable anonymous access; the kubelet will then return "401 – Unauthorized" responses to any unauthenticated request. For more information, check the official Kubernetes recommendations on kubelet authentication.
  • Enabling kubelet authorization. Any request that the kubelet API successfully authenticates is automatically authorized, because the default authorization mode is "AlwaysAllow," which permits all requests to the API. By configuring authorization properly, users can specify which HTTP methods and endpoints different users or service accounts are allowed to access. A user who is not authorized to access a specific resource will receive a "403 – Forbidden" response. For more information, check the official Kubernetes recommendations on kubelet authorization.
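The two settings above can also be expressed in a KubeletConfiguration file instead of command-line flags. The fragment below is a sketch of such a hardened configuration; verify the exact field names against the documentation for your Kubernetes version before applying it.

```yaml
# Sketch of a hardened KubeletConfiguration fragment (verify against your
# cluster version's documentation before use).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false        # reject anonymous requests with 401
  webhook:
    enabled: true         # delegate token review to the API server
authorization:
  mode: Webhook           # replace the default AlwaysAllow mode
```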

With Kubernetes' popularity and adoption, users should remain vigilant about its security. More information about protecting your Kubernetes cluster can be found in our two-part article "The Basics of Keeping Kubernetes Cluster Secure" (part 1 is here and part 2 is here). We also recommend customizing security settings as a best practice to keep your kubelets protected and to mitigate the impact of threats, as follows:

  • Restrict kubelet permissions to prevent attackers from reading kubelet credentials after breaking out of the container to perform malicious actions.
  • Rotate the kubelet certificates. In the instance of a compromise, the certificates will be short-lived and the potential impact to clusters will be reduced.
