## Announcement
Anthos clusters on VMware [1.15.0-gke.581](https://cloud.google.com/anthos/clusters/docs/on-prem/latest/version-history) is now available. To upgrade, see [Upgrading Anthos clusters on VMware](https://cloud.google.com/anthos/clusters/docs/on-prem/1.15/how-to/upgrading). Anthos clusters on VMware 1.15.0-gke.581 runs on Kubernetes 1.26.2-gke.1001.
The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.15, 1.14, and 1.13.
## Feature
* **Preview**: Support for vSphere 8.0
* **Preview**: Support for [VM-Host affinity](https://cloud.google.com/anthos/clusters/docs/on-prem/1.15/how-to/configure-vm-host-affinity) for user cluster node pools
* **Preview**: Support for [High availability control plane](https://cloud.google.com/anthos/clusters/docs/on-prem/1.15/how-to/admin-cluster-configuration-file#adminmaster-replicas-field) for admin clusters
* **Preview**: Support for [system metrics collection](https://cloud.google.com/anthos/clusters/docs/on-prem/1.15/how-to/logging-and-monitoring) using Google Cloud Managed Service for Prometheus
* **Preview**: You can now filter application logs by namespace, Pod labels, and content regex.
* **Preview**: Support for storage policy in user clusters
* **Preview**: You can now use `gkectl diagnose snapshot --upload=true` to upload a snapshot. `gkectl` generates the Cloud Storage bucket with the format `gs://anthos-snapshot[uuid]/vmware/$snapshot-name`.
* **GA**: Support for [upgrade and rollback of node pool version](https://cloud.google.com/anthos/clusters/docs/on-prem/1.15/how-to/upgrade-node-pools)
* **GA**: `gkectl get-config` is a new command that locally [generates cluster configuration files](https://cloud.google.com/anthos/clusters/docs/on-prem/1.15/how-to/get-configuration-file-from-cluster) from an existing admin or user cluster.
* **GA**: Support for multi-line parsing of Go and Java logs
* **GA**: Support for manual load balancing in user clusters that have Controlplane V2 enabled
* **GA**: Support for update of private registry credentials
* **GA**: Metrics and logs in the bootstrap cluster are now uploaded to Google Cloud through Google Cloud's operations suite to provide better observability on admin cluster operations.
* **GA**: vSphere CSI is now enabled for Windows node pools.
* Fully managed Cloud Monitoring integration dashboards. The new integration dashboards are installed automatically. You cannot modify the following dashboards because they are fully managed by Google; however, you can make a copy of a dashboard and customize the copy:
* Anthos Cluster Control Plane Uptime
* Anthos Cluster Node Status
* Anthos Cluster Pod Status
* Anthos Cluster Utilization Metering
* Anthos Cluster on VMware VM Status
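Several of the Preview features above are enabled through the cluster configuration files. For example, the high-availability admin control plane is configured by setting a replica count in the admin cluster configuration file. A minimal sketch, assuming the `adminMaster` field layout linked above (resource values are illustrative, not recommendations):

```yaml
# Admin cluster configuration file (excerpt) -- sketch only.
apiVersion: v1
kind: AdminCluster
adminMaster:
  cpus: 4          # vCPUs per control-plane node
  memoryMB: 16384  # memory per control-plane node
  replicas: 3      # 3 enables the HA control plane (Preview); 1 is non-HA
```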
## Breaking
* [CSI migration](https://cloud.google.com/anthos/clusters/docs/on-prem/1.15/concepts/storage#csi%5Fmigration%5Ffor%5Fthe%5Fvsphere%5Fstorage%5Fdriver) for the vSphere storage driver is enabled by default. A new storage preflight check and a new CSI workload preflight check verify that PersistentVolumes that used the old in-tree vSphere storage driver will continue to work with the vSphere CSI driver. There is a [known issue](https://cloud.google.com/anthos/clusters/docs/on-prem/1.15/known-issues#upgrade-storageclass-diskformat) during admin cluster upgrade. If you see a preflight check about a StorageClass `diskformat` parameter, you can use `--skip-validation-cluster-health` to skip the check. This issue will be fixed in a future release.
* The minimum required version of vCenter and ESXi is 7.0 Update 2.
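The new storage preflight check inspects StorageClasses that use the legacy in-tree vSphere provisioner, including their `diskformat` parameter. As an illustration (the StorageClass name is hypothetical), a StorageClass of this shape is what the check examines:

```yaml
# Example in-tree vSphere StorageClass (hypothetical name) -- sketch only.
# The new preflight check inspects StorageClasses that use the legacy
# in-tree provisioner, including the "diskformat" parameter below.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-vsphere-sc
provisioner: kubernetes.io/vsphere-volume   # legacy in-tree driver
parameters:
  diskformat: thin
```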
## Change
* Admin cluster update operations are now managed by an admin cluster controller.
* The Connect Agent now runs in high availability mode.
* The metrics server now runs in high-availability mode.
* Upgraded the [VMware vSphere Container Storage Plug-in](https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/index.html) from 2.7 to 3.0. This includes support for Kubernetes version 1.26. For more information, see the plug-in [release notes](https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/3.0/rn/vmware-vsphere-container-storage-plugin-30-release-notes/index.html).
* Upgraded Anthos Identity Service to `hybrid_identity_charon_20230313_0730_RC00`.
* Switched the node selector from `node-role.kubernetes.io/master` to `node-role.kubernetes.io/control-plane` and added toleration `node-role.kubernetes.io/control-plane` to system components.
* Controlplane V2 is now the default for new user clusters.
* When you delete a Controlplane V2 user cluster, the data disk is now deleted automatically.
* Cluster DNS now supports ordering policy for upstream servers.
* Added admin cluster CA certificate validation to the admin cluster upgrade preflight check.
* Upgraded Anthos Network Gateway to 1.4.4.
* Updated `anthos-multinet`.
* When you use `gkectl diagnose snapshot` to upload and share a snapshot with the Google Support team service account `service-[GOOGLE_CLOUD_PROJECT_NUMBER]@gcp-sa-anthossupport.iam.gserviceaccount.com`, `gkectl` provisions the service account automatically.
* Upgraded `node-exporter` from 1.0.1 to 1.4.1.
* Upgraded Managed Service for Prometheus for application metrics from 0.4 to 0.6.
* We now allow storage DRS to be enabled in manual mode.
* GKE connect is now required for admin clusters, and you cannot skip the corresponding validation. You can register existing admin clusters by using `gkectl update admin`.
* We no longer silently skip saving empty files in diagnose snapshots, but instead collect the names of those files in a new `empty_snapshots` file in the snapshot tarball.
* We now mount `/opt/data` using disk label `data`.
* In the vSphere CSI driver, enabled `improved-csi-idempotency` and `async-query-volume`, and disabled `trigger-csi-fullsync`. This enhances the vSphere CSI driver to ensure volume operations are idempotent.
* Changed the relative file path fields in the admin cluster configuration file to use absolute paths.
* Removed `kubectl describe` events from cluster snapshots for a better user experience. `kubectl describe` events fail when the target event expires. In contrast, `kubectl get` events persist and provide enough debugging information.
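The node selector and toleration change above follows the upstream Kubernetes rename of the `master` node role to `control-plane`. A minimal sketch of the scheduling fields a system component's Pod spec now carries (the Pod name and image are hypothetical placeholders):

```yaml
# Sketch of the scheduling fields system components now use
# after the master -> control-plane rename (name and image are placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: example-system-component
spec:
  nodeSelector:
    node-role.kubernetes.io/control-plane: ""   # was node-role.kubernetes.io/master
  tolerations:
  - key: node-role.kubernetes.io/control-plane  # newly added toleration
    operator: Exists
    effect: NoSchedule
  containers:
  - name: app
    image: registry.example/system-component:latest
```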
## Deprecated
* Support for `gkeadm` on macOS and Windows is deprecated.
* The `enableWindowsDataplaneV2` field in the user cluster configuration file is deprecated.
* The `gkectl enroll cluster` command is deprecated. Use `gcloud` to enroll a user cluster instead.
* The following dashboards in the Cloud Monitoring Sample Library will be deprecated in a future release:
* Anthos cluster control plane uptime
* Anthos cluster node status
* Anthos cluster pod status
* Anthos utilization metering
* GKE on-prem node status
* GKE on-prem control plane uptime
* GKE on-prem pod status
* GKE on-prem vSphere vm health status
* In a future release, the following customized dashboards will not be created when you create a new cluster:
* GKE on-prem node status
* GKE on-prem control plane uptime
* GKE on-prem pod status
* GKE on-prem vSphere vm health status
* GKE on-prem Windows pod status
* GKE on-prem Windows node status
## Fix
* Fixed the false error message generated by the cluster autoscaler about a missing ClusterRoleBinding. After a user cluster is deleted, that ClusterRoleBinding is no longer needed.
* Fixed an issue where `gkectl check-config` failed (nil pointer error) during validation for Manual load balancing.
* Fixed an issue where the cluster autoscaler did not work when Controlplane V2 was enabled.
* Fixed an issue where using `gkectl update` to enable Cloud Audit Logs did not work.
* Fixed an issue where a preflight check for Seesaw load balancer creation failed if the Seesaw group file already existed.
* We now backfill the OnPremAdminCluster OSImageType field to prevent an unexpected diff during update.
* Fixed an issue where disks might be out of order during the first boot.
* Fixed an issue where the private registry credentials file for the user cluster could not be loaded.
* Fixed an issue where the user-cluster node options and startup script used the cluster version instead of the node pool version.
* Fixed an issue where `gkectl diagnose cluster` didn't check the health of control-plane Pods for kubeception user clusters.
* Fixed an issue where KSASigningKeyRotation always showed as an unsupported change during user cluster update.
* Fixed an issue where a cluster might not be registered when the initial membership creation attempt failed.
* Fixed an issue where user cluster data disk validation used the cluster-level `vCenter.datastore` instead of `masterNode.vsphere.datastore`.
* Fixed an issue where `component-access-sa-key` was missing in the `admin-cluster-creds` Secret after admin cluster upgrade.
* Fixed an issue where during user cluster upgrade, the cluster state indicated that upgrade had completed before CA rotation had completed.
* Fixed an issue where advanced networking components were evicted or not scheduled on nodes because of Pod priority.
* Fixed a [known issue](https://cloud.google.com/anthos/clusters/docs/on-prem/1.14/known-issues#pod-create-or-delete-errors-due-to-calico-cni-service-account-auth-token-issue) where the `calico-node` Pod was unable to renew the auth token in the calico CNI kubeconfig file.
* Fixed Anthos Identity Service metric exporting issues.
* During preflight checks and cluster diagnosis, we now skip PersistentVolumes and PersistentVolumeClaims that use non-vSphere drivers.
* Fixed a [known issue](https://cloud.google.com/anthos/clusters/docs/on-prem/1.14/known-issues#pod-create-or-delete-errors-due-to-calico-cni-service-account-auth-token-issue) where CIDR ranges could not be used in the IP block file.
* Fixed an issue where auto resizing of CPU and memory for an admin cluster add-on node got reset by an admin cluster controller.
* `anet-operator` can now be scheduled to a Windows node in a user cluster that has Controlplane V2 enabled.
## Fix
Fixed the following vulnerabilities:
* Critical container vulnerabilities:
* [CVE-2022-32221](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-32221)
* [CVE-2022-47629](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-47629)
* [CVE-2021-46848](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-46848)
* [CVE-2022-41903](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41903)
* [CVE-2022-23521](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23521)
* High-severity container vulnerabilities:
* [CVE-2022-3094](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-3094)
* [CVE-2023-23916](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-23916)
* [CVE-2022-42898](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-42898)
* [CVE-2021-3449](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3449)
* [CVE-2023-26604](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-26604)
* [CVE-2023-23946](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-23946)
* [CVE-2022-39260](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-39260)
* [CVE-2022-3970](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-3970)
* [CVE-2022-23218](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23218)
* [CVE-2022-23219](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23219)
* [CVE-2021-3999](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3999)
* [CVE-2019-25013](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-25013)
* [CVE-2021-33574](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33574)
* Container-optimized OS vulnerabilities:
* [CVE-2023-28466](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-28466)
* [CVE-2023-0461](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0461)
* [CVE-2020-17437](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-17437)
* [CVE-2022-32149](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-32149)
* [CVE-2022-40320](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-40320)
* [CVE-2019-18276](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18276)
* [CVE-2022-40304](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-40304)
* Ubuntu vulnerabilities:
* [CVE-2022-4203](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-4203)
* [CVE-2022-4304](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-4304)
* [CVE-2022-4450](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-4450)
* [CVE-2023-0215](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0215)
* [CVE-2023-0216](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0216)
* [CVE-2023-0217](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0217)
* [CVE-2023-0286](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0286)
* [CVE-2023-0401](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0401)
* [CVE-2022-28321](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-28321)
* [CVE-2022-3328](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-3328)
## Issue
Known issues:
* [You might see a false error message about vCenter.dataDisk](https://cloud.google.com/anthos/clusters/docs/on-prem/1.15/known-issues#false-error-message-about-vcenter.datadisk).
* [Node pool creation might fail because of redundant VM-Host affinity rules](https://cloud.google.com/anthos/clusters/docs/on-prem/1.15/known-issues#node-pool-creation-fails-because-of-redundant-vm-host-affinity-rules).
* [gkectl repair admin-master might fail when the admin master node object cannot be deleted](https://cloud.google.com/anthos/clusters/docs/on-prem/1.15/known-issues#gkectl-repair-admin-master-may-fail-due-to-failed-to-delete-the-admin-master-node-object-and-reboot-the-admin-master-vm).
* [Pods might remain in Failed state after re-creation or update of a control-plane node](https://cloud.google.com/anthos/clusters/docs/on-prem/1.15/known-issues#pods-remain-in-failed-state-afer-re-creation-or-update-of-a-control-plane-node).
* [OnPremUserCluster might not become ready because private registry credentials aren't in a prepared Secret](https://cloud.google.com/anthos/clusters/docs/on-prem/1.15/known-issues#onpremusercluster-not-ready-because-of-private-registry-credentials).
* [gkectl upgrade admin might fail because of diskformat parameter in StorageClass](https://cloud.google.com/anthos/clusters/docs/on-prem/1.15/known-issues#upgrade-storageclass-diskformat).
* [Migrated in-tree vSphere volumes using the Windows file system can't be used with vSphere CSI driver](https://cloud.google.com/anthos/clusters/docs/on-prem/1.15/known-issues#migrated-in-tree-vsphere-volumes-using-the-windows-file-system-cant-be-used-with-vsphere-csi-driver).