## Announcement
Anthos clusters on VMware 1.16.0-gke.669 is now available. To upgrade, see [Upgrading Anthos clusters on VMware](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/how-to/upgrading). Anthos clusters on VMware 1.16.0-gke.669 runs on Kubernetes 1.27.4-gke.1600.
## Feature
* **Preview**: You can [migrate from the Seesaw load balancer to MetalLB](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/how-to/migrate-seesaw-metallb); a hedged user cluster configuration sketch appears after this list.
* **Preview**: Support for [the load balancing mode for a cluster that has Dataplane V2 enabled](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/how-to/user-cluster-configuration-file#dataplanev2-lbmode-field).
* **Preview**: Support for [user-managed admin workstations](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/how-to/create-admin-workstation).
* **Preview**: Support for [preparing credentials as Kubernetes secrets for admin clusters](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/how-to/prepared-credentials-admin). See also the [Secrets configuration file](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/how-to/secrets-configuration-file) reference.
* **GA**: Support for vSphere 8.0.
* **GA**: When the Anthos On-Prem API is enabled, admin and user clusters are automatically enrolled in it, enabling cluster lifecycle management from the Google Cloud CLI, the Google Cloud console, and Terraform. You can disable enrollment if needed. For more information, see [Admin cluster configuration file](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/how-to/admin-cluster-configuration-file#gkeOnPremAPI-section) and [User cluster configuration file](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/how-to/user-cluster-configuration-file#gkeOnPremAPI-section), as well as the configuration sketch after this list.
* **GA**: Logging and monitoring agents on each cluster now include [kube-state-metrics and node-exporter](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/concepts/logging-and-monitoring#stackdriver%5Fgkeop).
* **GA**: Support for [high-availability control planes for admin clusters](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/how-to/admin-cluster-configuration-file#adminmaster-replicas-field); a hedged admin cluster excerpt appears after this list.
* **GA**: Support for [VM-Host affinity](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/how-to/configure-vm-host-affinity) for user cluster node pools.
* **GA**: Support for [user cluster Storage Policy Based Management (SPBM)](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/how-to/configure-storage-policy).
* **GA**: [Google Cloud Managed Service for Prometheus](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/how-to/logging-and-monitoring#using-managed-service-for-prometheus) supports system metrics.
* **GA**: Support for [disabling the bundled Istio ingress controller](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/how-to/user-cluster-configuration-file#disablebundledingress-field) in the user cluster configuration.
* **GA**: [The same project ID and location](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/how-to/multiple-gcp-projects#fleet%5Fhost%5Fproject) are now enforced for new cluster creation.
* **GA**: Support for using `gkectl` to update secret encryption.
* **GA**: Support for [enabling or disabling antiAffinityGroups](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/how-to/admin-cluster-configuration-file#antiaffinitygroups-enabled-field).
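To make the configuration-file features above more concrete, here is a minimal, hypothetical excerpt of a user cluster configuration file showing MetalLB load balancing and Anthos On-Prem API enrollment. Field names follow the linked reference pages; the cluster name, VIPs, address ranges, and region are placeholders, and the specific Dataplane V2 load-balancing mode field is documented in the linked reference rather than shown here.

```yaml
# Hypothetical excerpt of a user cluster configuration file.
# Field names follow the linked reference docs; all values are placeholders.
apiVersion: v1
kind: UserCluster
name: "my-user-cluster"
enableDataplaneV2: true            # prerequisite for the Dataplane V2 load balancing mode
loadBalancer:
  vips:
    controlPlaneVIP: "203.0.113.10"
    ingressVIP: "203.0.113.11"
  kind: MetalLB                    # target of the Seesaw-to-MetalLB migration
  metalLB:
    addressPools:
    - name: "pool-1"
      addresses:
      - "203.0.113.11-203.0.113.20"
gkeOnPremAPI:                      # automatic enrollment in the Anthos On-Prem API
  enabled: true                    # set to false to opt out of enrollment
  location: "us-central1"          # Google Cloud region used for enrollment
```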
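Similarly, a hedged excerpt of an admin cluster configuration file shows where the HA control plane and antiAffinityGroups settings live. The shape follows the linked reference pages; the sizing values are illustrative, not recommendations.

```yaml
# Hypothetical excerpt of an admin cluster configuration file.
apiVersion: v1
kind: AdminCluster
adminMaster:             # high-availability admin control plane
  cpus: 4
  memoryMB: 8192
  replicas: 3            # 3 replicas for HA; 1 for a non-HA control plane
antiAffinityGroups:
  enabled: true          # spread control-plane VMs across ESXi hosts via DRS anti-affinity
```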
## Change
**Version changes:**
* Upgraded VMware vSphere Container Storage Plug-in from 3.0 to 3.0.2.
* The `crictl` command-line tool was updated to 1.27.
* The `containerd` config was updated to version 2.
**Other changes:**
* The output of the [gkectl diagnose cluster command](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/diagnose) has been updated to provide a summary that customers can copy and paste when opening support cases; see the example commands after this list.
* In-tree GlusterFS support is removed in Kubernetes 1.27. A storage validation was added to detect in-tree GlusterFS volumes.
* Metrics data is now gzip-compressed before being sent to Cloud Monitoring.
* The stackdriver-log-forwarder (fluent-bit) now sends logs to Cloud Logging with gzip compression to reduce egress bandwidth needed.
* Prometheus and Grafana are no longer bundled for in-cluster monitoring; they are replaced by Google Cloud Managed Service for Prometheus.
* The following flags in the [stackdriver custom resource](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/how-to/application-logging-monitoring) are deprecated, and changes to their values aren't honored (see the sketch after this list):
  * `scalableMonitoring`
  * `enableStackdriverForApplications` (replaced by `enableGMPForApplications` and `enableCloudLoggingForApplications`)
  * `enableCustomMetricsAdapter`
* Support was added for deploying the vSphere cloud controller manager in both admin and user clusters, and for enabling it in admin clusters and kubeception user clusters.
* The audit-proxy now sends audit logs to Cloud Audit Logging with gzip compression to reduce the egress bandwidth needed.
* Removed `accounts.google.com` from the internet preflight check requirement.
* The [pre-defined dashboards](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/reference/predefined-dashboards) are automatically present based on the presence of the corresponding metrics.
* Enabled auto repair for the `ReadonlyFilesystem` node condition.
* Support for the `d` (day) unit with the `--log-since` flag when taking a cluster snapshot, for example: `gkectl diagnose snapshot --log-since=1d`. See the example commands after this list.
* A new CSI Workload preflight check was added to verify that workloads using vSphere PVs can work through CSI.
* Preflight check failures for `gkectl prepare` now block install and upgrade operations.
* The kubelet read-only port is now disabled by default as a security enhancement. See [Enable kubelet readonly port](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/how-to/enable-kubelet-readonly-port) for instructions if you need to re-enable it for legacy reasons.
* Anthos Identity Service (AIS) Pods are now scheduled on control plane nodes instead of worker nodes.
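The `gkectl diagnose` changes above can be exercised with commands like the following sketch; the kubeconfig path and cluster name are placeholders, and the flags shown are the commonly documented ones.

```shell
# Print a diagnosis summary that can be copied into a support case.
gkectl diagnose cluster \
    --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
    --cluster-name USER_CLUSTER_NAME

# Take a snapshot whose journalctl log collection is limited to the
# last day, using the newly supported `d` unit.
gkectl diagnose snapshot \
    --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
    --cluster-name USER_CLUSTER_NAME \
    --log-since=1d
```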
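For the deprecated `stackdriver` flags listed above, the following hedged sketch shows where the replacement fields live in the stackdriver custom resource. The resource name and namespace follow the linked logging docs; the `apiVersion` and all spec values are assumptions for illustration only.

```yaml
# Hypothetical sketch of the stackdriver custom resource after the 1.16 flag changes.
# Edit with: kubectl --kubeconfig USER_CLUSTER_KUBECONFIG -n kube-system edit stackdriver stackdriver
apiVersion: addons.gke.io/v1alpha1   # assumed API group/version
kind: Stackdriver
metadata:
  name: stackdriver
  namespace: kube-system
spec:
  projectID: "my-project"            # placeholder values
  clusterName: "my-user-cluster"
  clusterLocation: "us-central1"
  # scalableMonitoring, enableStackdriverForApplications, and
  # enableCustomMetricsAdapter are deprecated; changes to them are not honored.
  enableGMPForApplications: true          # replaces application metrics collection
  enableCloudLoggingForApplications: true # replaces application log collection
```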
## Fix
The following issues are fixed in 1.16.0-gke.669:
* Fixed the [known issue](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/known-issues#admin-ssh-public-key-error-after-admin-cluster-upgrade-or-update) that caused intermittent SSH errors on the non-HA admin master after an update or upgrade.
* Fixed the [known issue](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/known-issues#upgrading-an-admin-cluster-enrolled-in-the-anthos-on-prem-api-could-fail) where upgrading an admin cluster enrolled in the Anthos On-Prem API could fail due to a membership update failure.
* Fixed the issue where the CPv1 stackdriver operator had `--is-kubeception-less=true` specified by mistake.
* Fixed the issue where clusters used the non-high-availability (HA) Connect Agent after an upgrade to 1.15.
* Fixed the [known issue](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/known-issues#cloud-audit-logging-failure-due-to-permission-denied) where Cloud Audit Logging failed due to permission-denied errors.
* Fixed a [known issue](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/known-issues#update-user-cluster-failed-after-ksa-signing-key-rotation) where a cluster update could not be fulfilled because of a KSA signing key version mismatch.
* Fixed a [known issue](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/known-issues#in-the-private-registry-username-causes-admin-control-plane-machine-startup-failure) where a `$` character in the private registry username caused admin control plane machine startup failures.
* Fixed a [known issue](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/known-issues#gkectl-diagnose-snapshot---log-since-fails-to-limit-the-time-window-for-journalctl-commands-running-on-the-cluster-nodes) where `gkectl diagnose snapshot` failed to limit the time window for `journalctl` commands running on the cluster nodes when you take a cluster snapshot with the `--log-since` flag.
* Fixed a [known issue](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/known-issues#nodes-fail-to-register-if-configured-hostname-contains-a-period) where node ID verification failed to handle hostnames containing dots.
* Fixed a continuous increase in logging agent memory usage.
* Fixed the issue that caused `gcloud` to fail to update the platform when the `required-platform-version` was already the current platform version.
* Fixed an issue where `cluster-api-controllers` in a high-availability admin cluster had no Pod anti-affinity, which could allow the three `clusterapi-controllers` Pods to be scheduled on the same control-plane node.
* Fixed an incorrect admin cluster resource link annotation key that could cause the cluster to be enrolled again by mistake.
* Fixed a [known issue](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/known-issues#node-pool-creation-fails-because-of-redundant-vm-host-affinity-rules) where node pool creation failed because of duplicated VM-Host affinity rules.
* The StorageClass parameter validation preflight check now reports a warning instead of a failure for parameters that are ignored after CSI migration. The StorageClass parameter `diskformat=thin` is now allowed and does not generate a warning.
* Fixed a false error message for `gkectl prepare` when using a high-availability admin cluster.
* Fixed an issue during the migration from the Seesaw load balancer to MetalLB that caused `DeprecatedKubeception` to always show up in the diff.
* Fixed a [known issue](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/known-issues#unsuccessful-failover-on-ha-controlplane-v2-user-cluster-and-admin-cluster-when-the-network-filters-out-duplicate-garp-requests) where some cluster nodes couldn't access the HA control plane when the underlying network performs ARP suppression.
* Removed unused Pod disruption budgets (such as `kube-apiserver-pdb`, `kube-controller-manager-pdb`, and `kube-etcd-pdb`) for Controlplane V2 user clusters.
## Security
The following vulnerabilities are fixed in 1.16.0-gke.669:
* Critical container vulnerabilities:
* [CVE-2022-29155](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29155)
* High-severity container vulnerabilities:
* [CVE-2023-0286](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0286)
* [CVE-2023-2828](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-2828)
* [CVE-2023-27561](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-27561)
* [CVE-2022-29458](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29458)
* [CVE-2023-3138](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-3138)
* [CVE-2020-7712](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7712)
* [CVE-2015-3276](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-3276)
* [CVE-2020-8032](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8032)
* [CVE-2023-0215](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0215)
* [CVE-2023-0361](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0361)
* [CVE-2022-4450](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-4450)
* [CVE-2023-2454](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-2454)
* [CVE-2022-29154](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29154)
* [CVE-2023-1999](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-1999)
* Container-Optimized OS vulnerabilities:
* [CVE-2023-2609](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-2609)
* [CVE-2023-0386](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0386)
* [CVE-2023-1872](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-1872)
* [CVE-2023-27561](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-27561)
* [CVE-2023-3090](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-3090)
* [CVE-2023-24329](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-24329)
* Windows vulnerabilities:
* [CVE-2022-41723](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41723)
* [CVE-2022-41725](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41725)