Anthos clusters on bare metal 1.16.0 is now available for download
## Feature
### Release 1.16.0
Anthos clusters on bare metal 1.16.0 is now available for [download](https://cloud.google.com/anthos/clusters/docs/bare-metal/1.16/downloads). To upgrade, see [Upgrading Anthos on bare metal](https://cloud.google.com/anthos/clusters/docs/bare-metal/1.16/how-to/upgrade). Anthos clusters on bare metal 1.16.0 runs on Kubernetes 1.27.
## Announcement
**Version 1.13 end of life:** In accordance with the [Anthos Version Support Policy](https://cloud.google.com/anthos/clusters/docs/bare-metal/1.16/getting-support#version-support), version 1.13 (all patch releases) of Anthos clusters on bare metal has reached its end of life and is no longer supported.
Red Hat Enterprise Linux (RHEL) 8 minor versions 8.2, 8.3, 8.4, and 8.5 have reached their [end of life](https://access.redhat.com/support/policy/updates/errata). Please ensure you're using a supported version of your operating system.
## Feature
**Cluster lifecycle:**
* Upgraded to Kubernetes version 1.27.4.
* Added support for Red Hat Enterprise Linux (RHEL) version 8.8.
* **GA:** Added support for [parallel upgrades of worker node pools](https://cloud.google.com/anthos/clusters/docs/bare-metal/1.16/how-to/upgrade#cw-strategy).
* **GA:** Added support to [upgrade specific worker node pools separately from the rest of the cluster](https://cloud.google.com/anthos/clusters/docs/bare-metal/1.16/how-to/upgrade#select%5Fnp%5Fup).
* **GA:** Added a [separate instance of etcd](https://cloud.google.com/anthos/clusters/docs/bare-metal/1.16/limits#etcd%5Fperformance) for the `etcd-events` object. This new etcd instance is always on and requires [ports 2382 and 2383 to be open on control plane nodes](https://cloud.google.com/anthos/clusters/docs/bare-metal/1.16/concepts/network-reqs#control%5Fplane%5Fnodes) for inbound TCP traffic. If these ports aren't opened, cluster creation and cluster upgrades are blocked.
* **GA:** Updated preflight checks for cluster installation and upgrades to incorporate changes from the latest Anthos clusters on bare metal patch version, addressing known issues and providing more useful checks.
* **GA:** Added support for automatically enrolling admin and user clusters in the Anthos On-Prem API when the API is enabled, which lets you manage the cluster lifecycle from the Google Cloud CLI, the Google Cloud console, and Terraform. You can disable enrollment if needed. For more information, see the description for the [gkeOnPremAPI](https://cloud.google.com/anthos/clusters/docs/bare-metal/1.16/reference/cluster-config-ref#gkeonpremapi) field in the cluster configuration file.
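As a sketch, disabling automatic enrollment looks roughly like this in the cluster configuration file (field placement is summarized here; see the linked `gkeOnPremAPI` reference for exact usage):
```
gkeOnPremAPI:
  # Set to false to opt this cluster out of automatic enrollment
  # (illustrative; check the gkeOnPremAPI reference for exact placement).
  enabled: false
```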
* Added a new health check to detect any unsupported drift in the custom resources managed by Anthos clusters on bare metal. Unsupported resource changes can lead to cluster problems.
* Added a new flag, `--target-cluster-name`, that is supported by the `bmctl register bootstrap` command.
**Networking:**
* **GA:** Added support for Services of type LoadBalancer to use `externalTrafficPolicy=Local` with [bundled load balancing with BGP](https://cloud.google.com/anthos/clusters/docs/bare-metal/1.16/how-to/lb-bundled-bgp).
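For illustration, a Service manifest using the local traffic policy might look like the following (names and ports are hypothetical):
```
apiVersion: v1
kind: Service
metadata:
  name: my-app    # hypothetical name
spec:
  type: LoadBalancer
  # Routes external traffic only to nodes that have local endpoints
  # and preserves the client source IP.
  externalTrafficPolicy: Local
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```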
* **Preview:** Added support for enabling Direct Server Return (DSR) load balancing for clusters configured with flat-mode networking. DSR load balancing is enabled with an annotation, `preview.baremetal.cluster.gke.io/dpv2-lbmode-dsr: dsr`.
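For example, the annotation is applied on the Cluster resource roughly as follows (metadata values are illustrative):
```
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: my-cluster                  # illustrative
  namespace: cluster-my-cluster     # illustrative
  annotations:
    # Enables DSR load balancing (Preview) for flat-mode networking.
    preview.baremetal.cluster.gke.io/dpv2-lbmode-dsr: dsr
```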
* **Preview:** Upgraded whereabouts to v0.6.1-gke.1 to support dual-stack networking.
* Added support for multiple BGP load balancer (`BGPLoadBalancer`) resources and BGP Community. Multiple BGP load balancer resources provide more flexibility to define which peers advertise specific load balancer nodes and Services. BGP Community support helps you to distinguish routes coming from BGP load balancers from other routes in your network.
**Observability:**
* **GA:** Added [support for system metrics](https://cloud.google.com/anthos/clusters/docs/bare-metal/1.16/how-to/log-monitoring#using-managed-service-for-prometheus) when you use Google Cloud Managed Service for Prometheus.
**Security and Identity:**
* **GA:** Added support for Binary Authorization, a service on Google Cloud that provides software supply-chain security for container-based applications. For more information, see [Set up Binary Authorization policy enforcement](https://cloud.google.com/anthos/clusters/docs/bare-metal/1.16/how-to/binary-authorization-policy).
* **GA:** Added support for [VPC Service Controls](https://cloud.google.com/anthos/clusters/docs/bare-metal/1.16/how-to/hardening-your-cluster#vpc-sc), which provides additional security for your clusters to help mitigate the risk of data exfiltration.
* **Preview:** Added support for [using custom cluster certificate authorities (CAs)](https://cloud.google.com/anthos/clusters/docs/bare-metal/1.16/how-to/custom-cluster-ca) to enable secure authentication and encryption between cluster components.
* **Preview:** Added support for configuring the Subject Alternative Names (SANs) of the kubeadm generated certificate for the kube-apiserver.
* Added support for running keepalived as a non-root user.
## Change
**Functionality changes:**
* Updated the constraint on the NodePool `spec.upgradeStrategy.concurrentNodes` field to be the smaller of 15 nodes or 50% of the node pool size.
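As a quick illustration of this constraint, the effective cap can be computed as follows (a hypothetical helper, not part of bmctl or the API; 50% is assumed to round down):

```python
def max_concurrent_nodes(pool_size: int) -> int:
    """Upper bound on spec.upgradeStrategy.concurrentNodes:
    the smaller of 15 nodes or 50% of the node pool size.
    Hypothetical helper for illustration; rounding behavior is assumed."""
    return min(15, pool_size // 2)

# A 20-node pool caps at 10 concurrent nodes; a 100-node pool caps at 15.
print(max_concurrent_nodes(20), max_concurrent_nodes(100))
```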
* Replaced legacy method of enabling application logging in the cluster configuration file with two fields, `enableCloudLoggingForApplications` and `enableGMPForApplications`, in the stackdriver custom resource.
The `spec.clusterOperations.enableApplication` field in the cluster configuration file has no effect on version 1.16.0 and higher clusters. This field populated the `enableStackdriverForApplications` field in the stackdriver custom resource, which enabled annotation-based workload metric collection. If you need this capability, use the `annotationBasedApplicationMetrics` feature gate in the stackdriver custom resource as shown in the following sample to keep the same behavior:
```
kind: stackdriver
spec:
  enableCloudLoggingForApplications: true
  featureGates:
    annotationBasedApplicationMetrics: true
```
* Added an optional `ksmNodePodMetricsOnly` feature gate in the stackdriver custom resource to reduce the number of metrics from kube-state-metrics. Reducing the number of metrics makes the monitoring pipeline more stable in large-scale clusters.
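For example, the feature gate is set in the stackdriver custom resource under `featureGates` (a sketch):
```
kind: stackdriver
spec:
  featureGates:
    ksmNodePodMetricsOnly: true
```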
* Audit logs are compressed on the wire for Cloud Audit Logs consumption, reducing egress bandwidth by approximately 60%.
* Upgraded local volume provisioner to v2.5.0.
* Upgraded snapshot controller to v5.0.1.
* Deprecated v1beta1 volume snapshot custom resources. Anthos clusters on bare metal will stop serving v1beta1 resources in a future release.
* Removed resource request limits on edge profile workloads.
* Added preflight check to make sure control plane and load balancer nodes aren't under maintenance before an upgrade.
* Updated the cluster snapshot capability so that information can be captured for the target cluster even when the cluster custom resource is missing or unavailable.
* Improved `bmctl` error reporting for failures during the creation of a bootstrap cluster.
* Added support for using the `baremetal.cluster.gke.io/maintenance-mode-deadline-seconds` cluster annotation to specify the maximum node draining duration, in seconds. By default, a 20-minute (1200-second) timeout is enforced. When the timeout elapses, all pods are stopped and the node is put into maintenance mode. For example, to change the timeout to 10 minutes, add the annotation `baremetal.cluster.gke.io/maintenance-mode-deadline-seconds: "600"` to your cluster.
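A sketch of the 10-minute example, applied to a Cluster resource (metadata values are illustrative):
```
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: my-cluster              # illustrative
  namespace: cluster-my-cluster # illustrative
  annotations:
    # Drain nodes for at most 600 seconds before forcing maintenance mode.
    baremetal.cluster.gke.io/maintenance-mode-deadline-seconds: "600"
```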
* Updated `bmctl check cluster` to create a HealthCheck custom resource in the admin cluster if it's healthy.
## Fix
**Fixes:**
* Fixed an issue where the apiserver could become unresponsive during a cluster upgrade for clusters with a single control plane node.
* Fixed an issue where cluster installations or upgrades fail when the cluster name has more than 45 characters.
* Fixed an issue where the control plane VIP wasn't reachable during cluster installation on Red Hat Enterprise Linux.
* Fixed an issue where audit logs were duplicated into the offline buffer even when they were sent to Cloud Audit Logs successfully.
* Fixed an issue where node-specific labels set on the node pool were sometimes overwritten.
* Updated [avoidBuggyIPs](https://cloud.google.com/anthos/clusters/docs/bare-metal/1.16/reference/cluster-config-ref#loadbalancer-addresspools-avoidbuggyips) and [manualAssign](https://cloud.google.com/anthos/clusters/docs/bare-metal/1.16/reference/cluster-config-ref#loadbalancer-addresspools-manualassign) fields in load balancer address pools (`spec.loadBalancers.addressPools`) to allow changes at any time.
* Fixed an issue where containerd didn't restart when there was a version mismatch. This issue caused an inconsistent containerd version within the cluster.
* Fixed an issue that caused the logging agent to use continuously increasing amounts of memory.
* Fixed preflight check so that it no longer ignores the `no_proxy` setting.
* Fixed the Anthos Identity Service annotation that's needed for exporting metrics.
* Fixed an issue that caused the `bmctl restore` command to stop responding for clusters with manually configured load balancers.
* Fixed an issue that prevented Anthos clusters on bare metal from restoring a high-availability quorum for nodes that use `/var/lib/etcd` as a mountpoint.
* Fixed an issue that caused health checks to report failure when they find a Pod with a status of `TaintToleration` even when the ReplicaSet for the Pod has sufficient Pods running.
* Fixed an issue that caused conflicts with third-party Ansible automation.
* Fixed a cluster upgrade issue that prevented some control plane nodes from rejoining a cluster configured for high availability.
## Fix
The following container image security vulnerabilities have been fixed:
* Critical container vulnerabilities:
* [CVE-2022-29155](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29155)
* [CVE-2022-29458](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29458)
* [CVE-2023-0464](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0464)
* [CVE-2023-0465](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0465)
* [CVE-2023-0466](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0466)
* [CVE-2023-2283](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-2283)
* [CVE-2023-2650](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-2650)
* High-severity container vulnerabilities:
* [CVE-2019-19906](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19906)
* [CVE-2020-8032](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8032)
* [CVE-2022-4450](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-4450)
* [CVE-2022-4904](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-4904)
* [CVE-2022-24407](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24407)
* [CVE-2022-29154](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29154)
* [CVE-2022-32190](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-32190)
* [CVE-2023-0045](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0045)
* [CVE-2023-0215](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0215)
* [CVE-2023-0286](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0286)
* [CVE-2023-0361](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0361)
* [CVE-2023-0386](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0386)
* [CVE-2023-0461](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0461)
* [CVE-2023-1077](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-1077)
* [CVE-2023-1078](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-1078)
* [CVE-2023-1118](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-1118)
* [CVE-2023-1281](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-1281)
* [CVE-2023-1670](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-1670)
* [CVE-2023-1829](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-1829)
* [CVE-2023-1989](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-1989)
* [CVE-2023-2454](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-2454)
* [CVE-2023-3567](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-3567)
* [CVE-2023-23559](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-23559)
* [CVE-2023-28466](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-28466)
* [CVE-2023-31436](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-31436)
* [CVE-2023-32233](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-32233)
* Medium-severity container vulnerabilities:
* [CVE-2018-1099](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1099)
* [CVE-2019-9511](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9511)
* [CVE-2019-9513](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9513)
* [CVE-2020-11080](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11080)
* [CVE-2020-13844](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13844)
* [CVE-2021-3468](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3468)
* [CVE-2022-2097](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-2097)
* [CVE-2022-3707](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-3707)
* [CVE-2022-3821](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-3821)
* [CVE-2022-4129](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-4129)
* [CVE-2022-4304](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-4304)
* [CVE-2022-4382](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-4382)
* [CVE-2022-4415](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-4415)
* [CVE-2022-23524](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23524)
* [CVE-2022-23525](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23525)
* [CVE-2022-23526](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23526)
* [CVE-2022-36055](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36055)
* [CVE-2023-0458](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0458)
* [CVE-2023-0459](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0459)
* [CVE-2023-1073](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-1073)
* [CVE-2023-1074](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-1074)
* [CVE-2023-1076](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-1076)
* [CVE-2023-1079](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-1079)
* [CVE-2023-1667](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-1667)
* [CVE-2023-1855](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-1855)
* [CVE-2023-1859](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-1859)
* [CVE-2023-1990](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-1990)
* [CVE-2023-1998](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-1998)
* [CVE-2023-2162](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-2162)
* [CVE-2023-2194](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-2194)
* [CVE-2023-2455](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-2455)
* [CVE-2023-2985](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-2985)
* [CVE-2023-3161](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-3161)
* [CVE-2023-3220](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-3220)
* [CVE-2023-3358](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-3358)
* [CVE-2023-23916](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-23916)
* [CVE-2023-26545](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-26545)
* [CVE-2023-28328](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-28328)
* [CVE-2023-28484](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-28484)
* [CVE-2023-29469](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-29469)
* [CVE-2023-30456](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-30456)
* [CVE-2023-32269](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-32269)
* [CVE-2023-33203](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-33203)
* [CVE-2023-33288](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-33288)
* Low-severity container vulnerabilities:
* [CVE-2009-5155](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-5155)
* [CVE-2015-8985](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8985)
* [CVE-2019-17594](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17594)
* [CVE-2019-17595](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17595)
* [CVE-2021-3468](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3468)
* [CVE-2022-2196](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-2196)
* [CVE-2022-3424](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-3424)
* [CVE-2022-4379](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-4379)
* [CVE-2023-1513](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-1513)
* [CVE-2023-1611](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-1611)
* [CVE-2023-1872](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-1872)
* [CVE-2023-2163](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-2163)
* [CVE-2023-21102](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-21102)
* [CVE-2023-22998](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-22998)
* [CVE-2023-23004](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-23004)
* [CVE-2023-25012](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25012)
* [CVE-2023-30772](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-30772)
## Issue
**Known issues:**
For information about the latest known issues, see [Anthos clusters on bare metal known issues](https://cloud.google.com/anthos/clusters/docs/bare-metal/1.16/troubleshooting/known-issues) in the Troubleshooting section.