From d30434022d7ef35e2b845a586dc1f6b9ebc4e75d Mon Sep 17 00:00:00 2001
From: Mangirdas Judeikis
Date: Tue, 24 Jun 2025 14:24:51 +0300
Subject: [PATCH 1/2] doc: Swap arguments places

---
 docs/content/setup/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/content/setup/index.md b/docs/content/setup/index.md
index f92b533e..1efa2f3f 100644
--- a/docs/content/setup/index.md
+++ b/docs/content/setup/index.md
@@ -15,7 +15,7 @@ helm repo add kcp https://kcp-dev.github.io/helm-charts
 And then install the chart:
 
 ```sh
-helm upgrade --install --create-namespace --namespace kcp-operator kcp/kcp-operator kcp-operator
+helm upgrade --install --create-namespace --namespace kcp-operator kcp-operator kcp/kcp-operator
 ```
 
 ## Further Reading

From ffdbbc4662a0e62b1180327dffb5cad9e438414a Mon Sep 17 00:00:00 2001
From: Marvin Beckers
Date: Thu, 26 Jun 2025 12:54:53 +0200
Subject: [PATCH 2/2] Add root shard, front proxy and kubeconfig to getting
 started guide

On-behalf-of: SAP
Signed-off-by: Marvin Beckers

---
 ...operator.kcp.io_v1alpha1_kubeconfig.yaml} |   0
 docs/content/setup/index.md                  |   6 +-
 docs/content/setup/quickstart.md             | 147 +++++++++++++++++-
 docs/main.py                                 |   9 +-
 4 files changed, 148 insertions(+), 14 deletions(-)
 rename config/samples/{operator.kcp.io_v1alpha1_kubeconfig_frontproxy.yaml => operator.kcp.io_v1alpha1_kubeconfig.yaml} (100%)

diff --git a/config/samples/operator.kcp.io_v1alpha1_kubeconfig_frontproxy.yaml b/config/samples/operator.kcp.io_v1alpha1_kubeconfig.yaml
similarity index 100%
rename from config/samples/operator.kcp.io_v1alpha1_kubeconfig_frontproxy.yaml
rename to config/samples/operator.kcp.io_v1alpha1_kubeconfig.yaml
diff --git a/docs/content/setup/index.md b/docs/content/setup/index.md
index 1efa2f3f..ccc2638c 100644
--- a/docs/content/setup/index.md
+++ b/docs/content/setup/index.md
@@ -2,7 +2,7 @@
 
 ## Requirements
 
-- [cert-manager](https://cert-manager.io/)
+- [cert-manager](https://cert-manager.io/) (see [Installing with Helm](https://cert-manager.io/docs/installation/helm/))
 
 ## Helm Chart
 
@@ -15,9 +15,11 @@ helm repo add kcp https://kcp-dev.github.io/helm-charts
 And then install the chart:
 
 ```sh
-helm upgrade --install --create-namespace --namespace kcp-operator kcp-operator kcp/kcp-operator
+helm install --create-namespace --namespace kcp-operator kcp-operator kcp/kcp-operator
 ```
 
+For full configuration options, check out the Chart [values](https://github.com/kcp-dev/helm-charts/blob/main/charts/kcp-operator/values.yaml).
+
 ## Further Reading
 
 {% include "partials/section-overview.html" %}
diff --git a/docs/content/setup/quickstart.md b/docs/content/setup/quickstart.md
index e032f1f6..abce3670 100644
--- a/docs/content/setup/quickstart.md
+++ b/docs/content/setup/quickstart.md
@@ -1,24 +1,26 @@
 ---
 description: >
-  Take your first steps after installing kcp-operator.
+  Create your first objects after installing kcp-operator.
 ---
 
 # Quickstart
 
-Make sure you have kcp-operator installed according to the instructions given in [Setup](./index.md).
+kcp-operator has to be installed according to the instructions given in [Setup](./index.md) before starting the steps below.
 
-## RootShard
+## etcd
 
 !!! warning
     Never deploy etcd like below in production as it sets up an etcd instance without authentication or TLS.
 
-Running a root shard requires a running etcd instance/cluster. You can set up a simple one via Helm:
+Running a root shard requires a running etcd instance/cluster. A simple one can be set up with Helm and the Bitnami etcd chart:
 
 ```sh
-$ helm install etcd oci://registry-1.docker.io/bitnamicharts/etcd --set auth.rbac.enabled=false --set auth.rbac.create=false
+helm install etcd oci://registry-1.docker.io/bitnamicharts/etcd --set auth.rbac.enabled=false --set auth.rbac.create=false
 ```
 
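+Before continuing, it can help to wait until the etcd pods are ready. A minimal readiness check, assuming the Bitnami chart's default pod labels and the release name `etcd` used above:
+
+```sh
+# assumes the Bitnami chart's default labels and the "default" namespace
+kubectl wait pod --selector app.kubernetes.io/name=etcd --for=condition=Ready --timeout=300s
+```
+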
-In addition, the root shard requires a reference to a cert-manager `Issuer` to issue its PKI CAs. You can create a self-signing one:
+## Create Root Shard
+
+In addition to a running etcd, the root shard requires a reference to a cert-manager `Issuer` to issue its PKI. Create a self-signing one:
 
 ```yaml
 apiVersion: cert-manager.io/v1
 kind: Issuer
 metadata:
   name: selfsigned
@@ -29,7 +31,9 @@ spec:
   selfSigned: {}
 ```
 
-Afterward, create a `RootShard` object. You can find documentation for it in the [CRD reference](../reference/crd/operator.kcp.io/rootshards.md).
+Afterward, create the first `RootShard` object. API documentation is available in the [CRD reference](../reference/crd/operator.kcp.io/rootshards.md).
+
+The main change to make is replacing `example.operator.kcp.io` with the hostname to be used for the kcp instance. The DNS entry for this hostname does not need to exist yet; it is created in the DNS setup step below.
 
 ```yaml
 apiVersion: operator.kcp.io/v1alpha1
 kind: RootShard
 metadata:
   name: root
 spec:
   external:
@@ -42,16 +46,143 @@ spec:
     hostname: example.operator.kcp.io
     port: 6443
   certificates:
+    # this references the issuer created above
     issuerRef:
       group: cert-manager.io
       kind: Issuer
       name: selfsigned
   cache:
     embedded:
+      # kcp comes with a cache server accessible to all shards,
+      # in this case it is fine to enable the embedded instance
       enabled: true
   etcd:
     endpoints:
+      # this is the service URL to etcd. Replace if the Helm chart was
+      # installed under a different name or the namespace is not "default"
       - http://etcd.default.svc.cluster.local:2379
 ```
 
-kcp-operator will create the necessary resources to start a `Deployment` of a kcp root shard.
+kcp-operator will create the resources necessary to start a `Deployment` of a kcp root shard, along with the PKI infrastructure it needs (via cert-manager).
+
+## Set up Front Proxy
+
+Every kcp instance deployed with kcp-operator needs at least one instance of kcp-front-proxy to be fully functional. Multiple front-proxy instances can exist to provide access to a complex, multi-shard, geo-distributed setup.
+
+For getting started, a `FrontProxy` object can look like this:
+
+```yaml
+apiVersion: operator.kcp.io/v1alpha1
+kind: FrontProxy
+metadata:
+  name: frontproxy
+spec:
+  rootShard:
+    ref:
+      # the name of the RootShard object created before
+      name: root
+  serviceTemplate:
+    spec:
+      # expose this front-proxy via a load balancer
+      type: LoadBalancer
+```
+
+kcp-operator will deploy a kcp-front-proxy installation based on this and connect it to the `root` root shard created before.
+
+### DNS Setup
+
+Once the `Service` `frontproxy-front-proxy` has successfully been reconciled, it should have either an IP address or a DNS name (depending on which load balancing integration is active on the Kubernetes cluster). A DNS entry for the external hostname chosen in the `RootShard` has to be created, pointing either to the IP address (with an A/AAAA record) or to the DNS name (with a CNAME record).
+
+Assuming this is what the `frontproxy-front-proxy` `Service` looks like:
+
+```sh
+kubectl get svc frontproxy-front-proxy
+```
+
+Output should look like this:
+
+```
+NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP                          PORT(S)          AGE
+frontproxy-front-proxy   LoadBalancer   10.240.30.54   XYZ.eu-central-1.elb.amazonaws.com   6443:32032/TCP   3m13s
+```
+
+Now a CNAME entry from `example.operator.kcp.io` to `XYZ.eu-central-1.elb.amazonaws.com` is required.
+
+!!! hint
+    Tools like [external-dns](https://github.com/kubernetes-sigs/external-dns) can help with automating this step to avoid manual DNS configuration.
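+
+Whether the new record resolves can be verified with standard DNS tooling, for example (using the hostname chosen above; actual output depends on the environment):
+
+```sh
+# example.operator.kcp.io is the placeholder hostname from the RootShard above
+dig +short example.operator.kcp.io
+```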
+
+## Initial Access
+
+Once the front proxy is deployed, a `Kubeconfig` object can be created to generate credentials for initial access to the kcp setup. An admin kubeconfig can be generated like this:
+
+```yaml
+apiVersion: operator.kcp.io/v1alpha1
+kind: Kubeconfig
+metadata:
+  name: kubeconfig-kcp-admin
+spec:
+  # the user name embedded in the kubeconfig
+  username: kcp-admin
+  groups:
+    # system:kcp:admin is a special privileged group in kcp.
+    # the kubeconfig generated from this should be kept secure at all times
+    - system:kcp:admin
+  # the kubeconfig will be valid for 365d but will be automatically refreshed
+  validity: 8766h
+  secretRef:
+    # the name of the secret that the assembled kubeconfig should be written to
+    name: admin-kubeconfig
+  target:
+    # a reference to the frontproxy deployed previously so the kubeconfig is accepted by it
+    frontProxyRef:
+      name: frontproxy
+```
+
+Once `admin-kubeconfig` has been created, the generated kubeconfig can be fetched:
+
+```sh
+kubectl get secret admin-kubeconfig -o jsonpath="{.data.kubeconfig}" | base64 -d > admin.kubeconfig
+```
+
+To use this kubeconfig, set the `KUBECONFIG` environment variable appropriately:
+
+```sh
+export KUBECONFIG=$(pwd)/admin.kubeconfig
+```
+
+It is now possible to connect to the kcp instance and manage workspaces via the [kcp kubectl plugin](https://docs.kcp.io/kcp/latest/setup/kubectl-plugin/). First, list the existing workspaces:
+
+```sh
+kubectl get ws
+```
+
+Initially, the command should return that no workspaces exist yet:
+
+```
+No resources found
+```
+
+To create a workspace, run:
+
+```sh
+kubectl create-workspace test
+```
+
+Output should look like this:
+
+```
+Workspace "test" (type root:organization) created. Waiting for it to be ready...
+Workspace "test" (type root:organization) is ready to use.
+```
+
+Congratulations, you've successfully set up kcp and connected to it! :tada:
+
+## Further Reading
+
+- Check out the [CRD documentation](../reference/index.md) for all configuration options.
diff --git a/docs/main.py b/docs/main.py
index 1e678294..6081c811 100644
--- a/docs/main.py
+++ b/docs/main.py
@@ -13,6 +13,7 @@
 # limitations under the License.
 
 import copy
+import os.path
 
 def define_env(env):
     """
@@ -55,10 +56,10 @@ def section_items(page, nav, config):
 
             # Copy so we don't modify the original
             child = copy.deepcopy(child)
-
-            # Subsection nesting that works across any level of nesting
-            # Replaced mkdocs fix_url function
-            child.file.url = child.url.replace(page.url, "./")
+
+            # mkdocs hates if a link in the generated Markdown (!) is already a fully-fledged URL
+            # and not a link to a file anymore, so we replace the URL with the file path here.
+            child.file.url = os.path.basename(child.file.src_uri)
             siblings.append(child)
 
     return siblings
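
For illustration, the effect of the final `docs/main.py` hunk can be sketched in isolation. This is a standalone Python snippet; the `src_uri` values are hypothetical examples, not paths taken from the repository:

```python
import os.path

# The removed code rewrote child.file.url relative to the current page URL;
# the replacement hands mkdocs a bare file name and lets mkdocs resolve the link.
# Hypothetical src_uri values, for illustration only:
for src_uri in ["setup/quickstart.md", "reference/crd/operator.kcp.io/rootshards.md"]:
    # os.path.basename keeps only the final path component,
    # e.g. "setup/quickstart.md" -> "quickstart.md"
    print(os.path.basename(src_uri))
```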