# Google Distributed Cloud for bare metal 1.33.0-gke.799
## Announcement

Google Distributed Cloud for bare metal 1.33.0-gke.799 is now available for [download](https://cloud.google.com/kubernetes-engine/distributed-cloud/bare-metal/docs/downloads). To upgrade, see [Upgrade clusters](https://cloud.google.com/kubernetes-engine/distributed-cloud/bare-metal/docs/how-to/upgrade). Google Distributed Cloud for bare metal 1.33.0-gke.799 runs on Kubernetes v1.33.2-gke.700.

After a release, it takes approximately 7 to 14 days for the version to become available for installations or upgrades with the [GKE On-Prem API clients](https://cloud.google.com/kubernetes-engine/distributed-cloud/bare-metal/docs/installing/cluster-lifecycle-management-tools): the Google Cloud console, the gcloud CLI, and Terraform.

If you use a third-party storage vendor, check the [Ready storage partners](https://cloud.google.com/anthos/docs/resources/partner-storage) document to make sure the storage vendor has already passed the qualification for this release of Google Distributed Cloud for bare metal.

## Feature

The following features were added in 1.33.0-gke.799:

* **GA**: Introduced an Envoy sidecar into the GKE Identity Service to increase security, reliability, and performance.
* **GA**: Added support for the Ubuntu 24.04 LTS operating system with Linux kernel versions such as 6.8 and 6.11. Support for Linux kernel 6.14 is explicitly excluded.
* **GA**: Added the ability to override the cluster-level pod density setting for individual node pools.
* **Preview**: Added Node Agent, which lets you transition from using Ansible over SSH for cluster operations to a more secure, agent-based model. Added [`bmctl nodeagent`](https://cloud.google.com/kubernetes-engine/distributed-cloud/bare-metal/docs/reference/bmctl#nodeagent) commands to provide a straightforward and reliable process for migrating existing clusters to use Node Agent.
* **Preview**: Added a bundled version of the [NVIDIA GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/index.html#about-the-nvidia-gpu-operator) (version 25.3.1). The bundled operator is an open-source solution for managing the NVIDIA software components needed to provision and manage GPU devices.
* **Preview**: Added Dynamic Resource Allocation, a Kubernetes API that lets you request and share generic resources, such as GPUs, among Pods and containers. When enabled, this capability helps you run AI workloads by dynamically and precisely allocating GPU resources within your bare metal clusters, improving resource utilization and performance for demanding workloads.
* **Preview**: Added vertical Pod autoscaling, which lets you analyze and set the CPU and memory resources required by Pods. Instead of maintaining up-to-date CPU and memory requests and limits for the containers in your Pods, you can configure vertical Pod autoscaling to provide recommended values that you apply manually, or to update the values automatically.
* **Preview**: Added support for skip-minor-version cluster upgrades. You can directly upgrade your cluster control plane nodes (and the entire cluster, if worker node pools aren't pinned at a lower version) to two minor versions above the current version. Added the [`bmctl upgrade intermediate-version`](https://cloud.google.com/kubernetes-engine/distributed-cloud/bare-metal/docs/reference/bmctl#upgrade%5Fintermediate-version) command to print the intermediate version for a skip-minor-version upgrade.
* Surfaced failures from node pool status to the `RecentFailures` field in the cluster status.
* Surfaced failures from failed preflight checks triggered by the cluster controller to the `RecentFailures` field in the cluster status.
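The vertical Pod autoscaling feature follows the upstream Kubernetes `VerticalPodAutoscaler` API. As a minimal sketch, a recommendation-only policy might look like the following; the `my-app` Deployment name and the resource bounds are hypothetical, and the exact steps for enabling the feature on a cluster are in the Google Distributed Cloud documentation:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa        # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # hypothetical workload
  updatePolicy:
    updateMode: "Off"     # recommend only; "Auto" applies values automatically
  resourcePolicy:
    containerPolicies:
      - containerName: "*"
        minAllowed:
          cpu: 100m
          memory: 128Mi
        maxAllowed:
          cpu: "2"
          memory: 2Gi
```

With `updateMode: "Off"`, the recommender only publishes suggested CPU and memory requests in the object's status, matching the "recommended values that you apply manually" mode described above; switching to `"Auto"` lets the autoscaler update Pods itself.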
## Change

The following functional changes were made in 1.33.0-gke.799:

* Changed logging behavior so that kubeadm logs show up in journald on the node machine where kubeadm runs.
* To help prevent stale ARP cache issues, `iptables-persistent` is installed on Debian nodes.
* Cluster manifests are deployed by using a Kubernetes Job, allowing the cluster operator to be more responsive to cluster events.
* Updated the validation checks for cluster upgrades to enforce the [cluster version skew rules](https://cloud.google.com/kubernetes-engine/distributed-cloud/bare-metal/docs/how-to/upgrade-lifecycle#version%5Fskew) for user clusters. If the upgrade version information for a user cluster doesn't comply with the version skew rules, the upgrade is halted.
* Updated health checks and upgrade preflight checks to check for kubeadm certificate expiration.
* Updated the etcd version to 3.5.21.
* Removed support for Red Hat Enterprise Linux 8.8, as it is beyond the [Red Hat support window](https://access.redhat.com/support/policy/updates/errata#RHEL8%5Fand%5F9%5FLife%5FCycle).
* Removed support for Ubuntu 20.04 LTS, as it reached the end of [standard security maintenance in May 2025](https://ubuntu.com/about/release-cycle#ubuntu-kernel-release-cycle).
* Upgraded `ansible-core` to 2.16.4 to support Python 3.12.
* Increased the RSA key size for Cluster API certificates to 4096 bits for improved security.

## Fix

The following issues were fixed in 1.33.0-gke.799:

* Fixed an issue where restoring a cluster that has a node with a GPU caused instability of Pods on the nodes.
* Fixed an issue that caused the Ansible playbook for handling Cloud Audit Logging to fail and not complete.
* Fixed an issue that caused nodes to get stuck in maintenance mode. Health checks have been updated so that the network check job skips connectivity checks for nodes that are in maintenance mode.
* Fixed an issue where the CronJob for periodic health checks wasn't updating after configuration changes.
* Fixed vulnerabilities listed in [Vulnerability fixes](https://cloud.google.com/kubernetes-engine/distributed-cloud/bare-metal/docs/vulnerabilities).

## Issue

For information about the latest known issues, see [Google Distributed Cloud for bare metal known issues](https://cloud.google.com/kubernetes-engine/distributed-cloud/bare-metal/docs/troubleshooting/known-issues) in the Troubleshooting section.