Add Kubermatic-Virtualization docs #2020
| @@ -0,0 +1,7 @@ | ||
| +++ | ||
| title = "Kubermatic Virtualization Docs" | ||
| description = "Seamlessly modernize your infrastructure by building your private cloud entirely with Kubernetes" | ||
| sitemapexclude = true | ||
| +++ | ||
|
|
||
| Seamlessly modernize your infrastructure by building your private cloud entirely with Kubernetes |
| @@ -0,0 +1,32 @@ | ||
| +++ | ||
| title = "" | ||
| date = 2025-07-18T16:06:34+02:00 | ||
| +++ | ||
|
|
||
| ## What is Kubermatic Virtualization (Kube-V)? | ||
| Kubermatic Virtualization (Kube-V) provides a unified platform that enables organizations to seamlessly orchestrate and manage both traditional virtual machines (VMs) and modern containerized applications. | ||
|
|
||
| It extends the powerful automation and operational benefits of Kubernetes to your VM-based workloads, allowing for a more consistent and efficient approach to infrastructure management. | ||
|
|
||
| Kubermatic Virtualization delivers Kubernetes-native management by unifying VM and container orchestration: it integrates virtual machines (VMs) directly into Kubernetes as native, first-class objects, so you can manage, scale, and deploy VMs with the same familiar Kubernetes tools, APIs, and workflows you already use for your containerized applications. | ||
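For illustration, here is a minimal sketch of a `VirtualMachine` manifest (the name, image, and sizes are placeholders, not part of the examples elsewhere in these docs) showing that a VM is declared like any other Kubernetes object:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm                       # illustrative name
spec:
  runStrategy: Always                 # keep the VM running; restart it if it stops
  template:
    spec:
      domain:
        cpu:
          cores: 1
        memory:
          guest: 2Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest   # example container disk image
```

Once applied with `kubectl apply -f`, the VM shows up alongside your other workloads and can be inspected with the usual tooling, for example `kubectl get virtualmachines`.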
| ## Features | ||
| Kubermatic Virtualization offers a comprehensive set of features designed to modernize infrastructure and streamline operations by converging virtual machine and container management. | ||
|
|
||
| ### Streamlined Transition and Unified Control | ||
|
|
||
| * Effortless Migration: Tools are provided to simplify the migration of existing VMs from diverse environments to the unified platform, making infrastructure modernization more accessible. | ||
| * Centralized Operations: Gain single-pane-of-glass management for the entire lifecycle of both VMs and containers. This includes everything from creation, networking, and storage to scaling and monitoring, all accessible from a centralized interface or command-line tools. | ||
|
|
||
| ### Infrastructure Modernization and Efficiency | ||
|
|
||
| * Gradual Modernization Path: Integrate VMs into a cloud-native environment, offering a practical pathway to modernize legacy applications without the immediate need for extensive refactoring into containers. You can run new containerized applications alongside existing virtualized ones. | ||
| * Optimized Resource Use: By running VMs and containers on the same underlying physical infrastructure, organizations can achieve better hardware resource utilization and significantly reduce operational overhead. | ||
|
|
||
| ### Enhanced Development and Reliability | ||
|
|
||
| * Improved Developer Experience: Developers can leverage familiar, native Kubernetes tools and workflows for managing both VMs and containers, which minimizes learning curves and speeds up development cycles. | ||
| * Automated Workflows (CI/CD): Integrate VMs seamlessly into Kubernetes-native CI/CD pipelines, enabling automated testing and deployment processes. | ||
| * Built-in Resilience: Benefit from the platform's inherent high availability and fault tolerance features, including automated restarts and live migration of VMs between nodes, ensuring continuous application uptime. | ||
| * Integrated Networking and Storage: VMs natively use the platform's software-defined networking (SDN) and storage capabilities, providing consistent network policies, enhanced security, and streamlined storage management. | ||
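As a sketch of the last point, a standard Kubernetes `NetworkPolicy` applies to VM launcher pods just as it does to container pods, assuming the VM template carries the matching label; the names below are purely illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-db               # illustrative policy name
spec:
  podSelector:
    matchLabels:
      app: database                   # matches VM launcher pods and container pods alike
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web
      ports:
        - protocol: TCP
          port: 5432
```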
|
|
||
| See [kubermatic.com](https://www.kubermatic.com/). |
| @@ -0,0 +1,38 @@ | ||
| +++ | ||
| title = "Architecture" | ||
| date = 2025-07-18T16:06:34+02:00 | ||
| weight = 5 | ||
|
|
||
| +++ | ||
|
|
||
| ## Architecture Overview | ||
| Kubermatic-Virtualization (Kube-V) is an advanced platform engineered to construct private cloud infrastructures founded | ||
| entirely on Kubernetes. Its core design principle is the seamless integration of Kubernetes-native workloads (containers) | ||
| and traditional virtualized workloads (Virtual Machines - VMs) under a unified management umbrella. Kube-V achieves this | ||
| by building upon Kubernetes as its foundational layer and incorporating KubeVirt to orchestrate and manage VMs alongside | ||
| containerized applications. | ||
|
|
||
| Here's a breakdown of the architecture and how these components interact: | ||
| ### Host Nodes | ||
| Host nodes can run any popular Linux-based operating system, such as Ubuntu or Rocky Linux, with nested virtualization | ||
| enabled so that KVM-based virtual machines can run on them. | ||
|
|
||
| ### Kubernetes | ||
| The foundation, providing the orchestration, scheduling, and management plane for all workloads. It also introduces a | ||
| declarative API and custom resources (CRDs). | ||
|
|
||
| ### KubeVirt | ||
| An extension to Kubernetes that enables running and managing VMs as native Kubernetes objects. It uses Kubernetes pods | ||
| as the execution unit: each running VM is encapsulated within a standard Kubernetes pod, specifically a virt-launcher pod. | ||
|
|
||
| ### OVN (Open Virtual Network) | ||
| The network fabric, providing advanced SDN (Software-Defined Networking) capabilities for VMs and Pods, replacing or | ||
| augmenting the default CNI (Container Network Interface). The network fabric introduces VPCs (Virtual Private Clouds) as | ||
| isolated operational environments, structured through subnets and network policies. | ||
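As a sketch of what this looks like in practice (the resource names and CIDR below are placeholders; the CRDs and fields are the ones provided by Kube-OVN), a VPC and a subnet attached to it can be declared like this:

```yaml
apiVersion: kubeovn.io/v1
kind: Vpc
metadata:
  name: tenant-a-vpc                  # illustrative VPC name
spec:
  namespaces:
    - tenant-a                        # namespaces allowed to use this VPC
---
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: tenant-a-subnet               # illustrative subnet name
spec:
  vpc: tenant-a-vpc
  protocol: IPv4
  cidrBlock: 10.100.0.0/16            # placeholder CIDR
  namespaces:
    - tenant-a
```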
|
|
||
| ### CSI Drivers | ||
| A standardized interface that allows Kubernetes to connect to various storage systems, providing persistent storage for | ||
| VMs and containers. Kube-V is agnostic to the storage of the underlying infrastructure: any CSI driver can be | ||
| used, enabling dynamic provisioning, attachment, and management of persistent volumes for VMs and Pods. | ||
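For example, a VM root disk can be requested through an ordinary `PersistentVolumeClaim`; the storage class name below is a placeholder for whatever CSI-backed class is installed in your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-rootdisk                   # illustrative claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: my-csi-storage-class   # placeholder: any installed CSI-backed StorageClass
```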
|
|
||
|  |
| @@ -0,0 +1,5 @@ | ||
| +++ | ||
| title = "Compatibility" | ||
| date = 2025-07-18T16:06:34+02:00 | ||
| weight = 5 | ||
| +++ |
| @@ -0,0 +1,21 @@ | ||||||||||
| +++ | ||||||||||
| title = "Kubermatic Virtualization Components" | ||||||||||
| date = 2025-07-18T16:06:34+02:00 | ||||||||||
| weight = 5 | ||||||||||
| +++ | ||||||||||
|
|
||||||||||
| The following list applies only to the Kube-V version that is currently available. Kubermatic places a strong emphasis | ||||||||||
| on the security and reliability of the software it provides and therefore releases regular updates, which also include component updates. | ||||||||||
|
|
||||||||||
|
|
||||||||||
| | Kube-V Component | Version | | ||||||||||
| |:---------------------------------:|:-------:| | ||||||||||
| | Kubernetes | v1.33.0 | | ||||||||||
| | KubeVirt | v1.5.2 | | ||||||||||
| | Containerized Data Importer (CDI) | v1.62.0 | | ||||||||||
| | KubeOVN | v1.14.4 | | ||||||||||
| | KubeOne | v1.11.1 | | ||||||||||
| | Kyverno | v1.14.4 | | ||||||||||
|
|
||||||||||
| | Cert Manager | v1.18.2 | | ||||||||||
| | MetalLB | v0.15.2 | | ||||||||||
|
Review comment: Longhorn and Multus are missing from this table.
|
||||||||||
|
|
||||||||||
| @@ -0,0 +1,21 @@ | ||
| +++ | ||
| title = "Operating Systems" | ||
| date = 2025-07-18T16:06:34+02:00 | ||
| weight = 3 | ||
| +++ | ||
|
|
||
| ## Supported Operating Systems | ||
|
|
||
| The following operating systems are supported: | ||
|
|
||
| * Ubuntu 20.04 (Focal) | ||
| * Ubuntu 22.04 (Jammy Jellyfish) | ||
| * Ubuntu 24.04 (Noble Numbat) | ||
| * Rocky Linux 8 | ||
| * RHEL 8.0, 8.1, 8.2, 8.3, 8.4 | ||
| * Flatcar | ||
|
Comment on lines +14 to +16: FYI, Longhorn prerequisites are supported for the Debian distro.
||
|
|
||
| {{% notice warning %}} | ||
| The minimum kernel version for Kubernetes 1.32 clusters is 4.19. Some operating system versions, such as RHEL 8, | ||
| do not meet this requirement and therefore do not support Kubernetes 1.32 or newer. | ||
| {{% /notice %}} | ||
| @@ -0,0 +1,7 @@ | ||
| +++ | ||
| title = "Concepts" | ||
| date = 2025-07-18T16:06:34+02:00 | ||
| weight = 1 | ||
| +++ | ||
|
|
||
| Get to know the concepts behind Kubermatic Virtualization (Kube-V). |
| @@ -0,0 +1,5 @@ | ||
| +++ | ||
| title = "Compute" | ||
| date = 2025-07-18T16:06:34+02:00 | ||
| weight = 15 | ||
| +++ |
| @@ -0,0 +1,241 @@ | ||
| +++ | ||
| title = "VirtualMachines Resources" | ||
| date = 2025-07-18T16:06:34+02:00 | ||
| weight = 15 | ||
| +++ | ||
|
|
||
| ## VirtualMachines | ||
| As the name suggests, a VirtualMachine (VM) represents a long-running, stateful virtual machine. It's similar to a | ||
| Kubernetes Deployment for Pods, meaning you define the desired state (e.g., "this VM should be running," "it should | ||
| have 2 CPUs and 4GB RAM") and Kubermatic-Virtualization ensures that state is maintained. It allows you to start, stop, and configure VMs. | ||
|
|
||
| Here is an example of how users can create a VM: | ||
| ```yaml | ||
| apiVersion: kubevirt.io/v1 | ||
| kind: VirtualMachine | ||
| metadata: | ||
| name: my-vm-with-http-data-volume | ||
| spec: | ||
| runStrategy: RerunOnFailure | ||
| template: | ||
| metadata: | ||
| labels: | ||
| app: my-vm-with-http-data-volume | ||
| annotations: | ||
| kubevirt.io/allow-pod-bridge-network-live-migration: "true" | ||
| spec: | ||
| domain: | ||
| cpu: | ||
| cores: 1 | ||
| memory: | ||
| guest: 2Gi | ||
| devices: | ||
| disks: | ||
| - name: rootdisk | ||
| disk: | ||
| bus: virtio | ||
| interfaces: | ||
| - name: default | ||
| masquerade: {} | ||
| volumes: | ||
| - name: rootdisk | ||
| dataVolume: | ||
| name: my-http-data-volume | ||
| networks: | ||
| - name: default | ||
| pod: {} | ||
| dataVolumeTemplates: | ||
| - metadata: | ||
| name: my-http-data-volume | ||
| spec: | ||
| sourceRef: | ||
| kind: DataSource | ||
| name: my-http-datasource | ||
| apiGroup: cdi.kubevirt.io | ||
| pvc: | ||
| accessModes: | ||
| - ReadWriteOnce | ||
| resources: | ||
| requests: | ||
| storage: 10Gi # <--- IMPORTANT: Adjust to your desired disk size | ||
| # storageClassName: my-storage-class # <--- OPTIONAL: Uncomment and replace with your StorageClass name if needed | ||
| --- | ||
| apiVersion: cdi.kubevirt.io/v1beta1 | ||
| kind: DataSource | ||
| metadata: | ||
| name: my-http-datasource | ||
| spec: | ||
| source: | ||
| http: | ||
| url: "http://example.com/path/to/your/image.qcow2" # <--- IMPORTANT: Replace with the actual URL of your disk image | ||
| # certConfig: # <--- OPTIONAL: Uncomment and configure if your HTTP server uses a custom CA | ||
| # caBundle: "base64encodedCABundle" | ||
| # secretRef: | ||
| # name: "my-http-cert-secret" | ||
| # cert: | ||
| # secretRef: | ||
| # name: "my-http-cert-secret" | ||
| # key: | ||
| # secretRef: | ||
| # name: "my-http-key-secret" | ||
| ``` | ||
| ### 1. `VirtualMachine` (apiVersion: `kubevirt.io/v1`) | ||
|
|
||
| This is the main KubeVirt resource that defines your virtual machine. | ||
|
|
||
| - **`spec.template.spec.domain.devices.disks`**: | ||
| Defines the disk attached to the VM. We reference `rootdisk` here, which is backed by our DataVolume. | ||
|
|
||
| - **`spec.template.spec.volumes`**: | ||
| Links the `rootdisk` to a `dataVolume` named `my-http-data-volume`. | ||
|
|
||
| - **`spec.dataVolumeTemplates`**: | ||
| This is the crucial part. It defines a template for a DataVolume that will be created automatically when the VM is started. | ||
|
|
||
| --- | ||
|
|
||
| ### 2. `DataVolumeTemplate` (within `VirtualMachine.spec.dataVolumeTemplates`) | ||
|
|
||
| - **`metadata.name`**: | ||
| The name of the DataVolume that will be created (referenced in `spec.template.spec.volumes`). | ||
|
|
||
| - **`spec.sourceRef`**: | ||
| Points to a `DataSource` resource that defines the actual source of the disk image. A `DataSource` is used here to encapsulate HTTP details. | ||
|
|
||
| - **`spec.pvc`**: | ||
| Defines the characteristics of the PersistentVolumeClaim (PVC) that will be created for this DataVolume: | ||
|
|
||
| - **`accessModes`**: Typically `ReadWriteOnce` for VM disks. | ||
| - **`resources.requests.storage`**: | ||
| ⚠️ **Crucially, set this to the desired size of your VM's disk.** It should be at least as large as your source image. | ||
| - **`storageClassName`**: *(Optional)* Specify a StorageClass if needed; otherwise, the default will be used. | ||
|
|
||
| --- | ||
|
|
||
| ### 3. `DataSource` (apiVersion: `cdi.kubevirt.io/v1beta1`) | ||
|
|
||
| This is a CDI (Containerized Data Importer) resource that encapsulates the details of where your disk image comes from. | ||
|
|
||
| - **`metadata.name`**: | ||
| The name of the `DataSource` (referenced in `dataVolumeTemplate.spec.sourceRef`). | ||
|
|
||
| - **`spec.source.http.url`**: | ||
| 🔗 This is where you put the direct URL to your disk image (e.g., a `.qcow2`, `.raw`, etc. file). | ||
|
|
||
| - **`spec.source.http.certConfig`**: *(Optional)* | ||
| If your HTTP server uses a custom CA or requires client certificates, configure them here. | ||
|
|
||
| --- | ||
|
|
||
| ### VirtualMachinePools | ||
| KubeVirt's VirtualMachinePool is a powerful resource that allows you to manage a group of identical Virtual Machines (VMs) | ||
| as a single unit, similar to how a Kubernetes Deployment manages a set of Pods. It's designed for scenarios where you need | ||
| multiple, consistent, and often ephemeral VMs that can scale up or down based on demand. | ||
|
|
||
| Here's a breakdown of the key aspects of KubeVirt VirtualMachinePools: | ||
|
|
||
|
|
||
| ```yaml | ||
| apiVersion: kubevirt.io/v1alpha1 | ||
| kind: VirtualMachinePool | ||
| metadata: | ||
| name: my-vm-http-pool | ||
| spec: | ||
| replicas: 3 # <--- IMPORTANT: Number of VMs in the pool | ||
| selector: | ||
| matchLabels: | ||
| app: my-vm-http-pool-member | ||
| virtualMachineTemplate: | ||
| metadata: | ||
| labels: | ||
| app: my-vm-http-pool-member | ||
| annotations: | ||
| kubevirt.io/allow-pod-bridge-network-live-migration: "true" | ||
| spec: | ||
| runStrategy: RerunOnFailure # Or Always, Halted, Manual | ||
| domain: | ||
| cpu: | ||
| cores: 1 | ||
| memory: | ||
| guest: 2Gi | ||
| devices: | ||
| disks: | ||
| - name: rootdisk | ||
| disk: | ||
| bus: virtio | ||
| interfaces: | ||
| - name: default | ||
| masquerade: {} | ||
| volumes: | ||
| - name: rootdisk | ||
| dataVolume: | ||
| name: my-pool-vm-data-volume # This name will have a unique suffix appended by KubeVirt | ||
| networks: | ||
| - name: default | ||
| pod: {} | ||
| dataVolumeTemplates: | ||
| - metadata: | ||
| name: my-pool-vm-data-volume # This name will be the base for the unique DataVolume names | ||
| spec: | ||
| sourceRef: | ||
| kind: DataSource | ||
| name: my-http-datasource | ||
| apiGroup: cdi.kubevirt.io | ||
| pvc: | ||
| accessModes: | ||
| - ReadWriteOnce | ||
| resources: | ||
| requests: | ||
| storage: 10Gi # <--- IMPORTANT: Adjust to your desired disk size for each VM | ||
| # storageClassName: my-storage-class # <--- OPTIONAL: Uncomment and replace with your StorageClass name if needed | ||
| --- | ||
| apiVersion: cdi.kubevirt.io/v1beta1 | ||
| kind: DataSource | ||
| metadata: | ||
| name: my-http-datasource | ||
| spec: | ||
| source: | ||
| http: | ||
| url: "http://example.com/path/to/your/image.qcow2" # <--- IMPORTANT: Replace with the actual URL of your disk image | ||
| # certConfig: # <--- OPTIONAL: Uncomment and configure if your HTTP server uses a custom CA | ||
| # caBundle: "base64encodedCABundle" | ||
| # secretRef: | ||
| # name: "my-http-cert-secret" | ||
| # cert: | ||
| # secretRef: | ||
| # name: "my-http-cert-secret" | ||
| # key: | ||
| # secretRef: | ||
| # name: "my-http-key-secret" | ||
|
|
||
| ``` | ||
| ### VirtualMachinePool (apiVersion: `kubevirt.io/v1alpha1`) | ||
|
|
||
| 1. **`API Version`** | ||
| - Use `apiVersion: kubevirt.io/v1alpha1` for `VirtualMachinePool`. | ||
| - This is a slightly different API version than `VirtualMachine`. | ||
|
|
||
| 2. **`spec.replicas`** | ||
| - Specifies how many `VirtualMachine` instances the pool should maintain. | ||
|
|
||
| 3. **`spec.selector`** | ||
| - Essential for the `VirtualMachinePool` controller to manage its VMs. | ||
| - `matchLabels` must correspond to the `metadata.labels` within `virtualMachineTemplate`. | ||
|
|
||
| 4. **`spec.virtualMachineTemplate`** | ||
| - This section contains the full `VirtualMachine` spec that serves as the template for each VM in the pool. | ||
|
|
||
| 5. **`dataVolumeTemplates` Naming in a Pool** | ||
| - `VirtualMachinePool` creates `DataVolumes` from `dataVolumeTemplates`. | ||
| - A unique suffix is appended to the `metadata.name` of each `DataVolume` (e.g., `my-pool-vm-data-volume-abcde`), ensuring each VM gets a distinct PVC. | ||
|
|
||
| --- | ||
|
|
||
| ### How It Works (Similar to Deployment for Pods) | ||
|
|
||
| 1. Apply the manifests above. The `my-http-datasource` `DataSource` must exist so the pool's DataVolumes can reference it. | ||
| 2. The `VirtualMachinePool` controller creates the defined number of `VirtualMachine` replicas. | ||
| 3. Each `VirtualMachine` triggers the creation of a `DataVolume` using the specified `dataVolumeTemplate` and `my-http-datasource`. | ||
| 4. CDI (Containerized Data Importer) downloads the image into a new unique `PersistentVolumeClaim` (PVC) for each VM. | ||
| 5. Each `VirtualMachine` then starts using its dedicated PVC. | ||
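Scaling the pool works like scaling a Deployment: change `spec.replicas` and the controller reconciles the pool to the new size. A minimal sketch, using the pool from the example above (apply the change with `kubectl edit` or an equivalent patch):

```yaml
# Grow my-vm-http-pool from 3 to 5 VMs; the controller creates the missing
# VirtualMachines (and their DataVolumes) and removes surplus ones on scale-down.
spec:
  replicas: 5
```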
|
|
| @@ -0,0 +1,5 @@ | ||
| +++ | ||
| title = "Networking" | ||
| date = 2025-07-18T16:06:34+02:00 | ||
| weight = 15 | ||
| +++ |