Easy vClusters with Flux and OIDC
Apr 29, 2025
vCluster is a great tool for anyone managing larger Kubernetes clusters with multiple users, multiple environments, or a need for stronger isolation between workloads. For me, vCluster works brilliantly as a way of testing new applications in a disposable cluster, where I can run some of the larger, more integrated operators and tooling without much fear of interfering with my existing Kubernetes setup.
In my cluster I make use of Flux to manage my manifests and resources in a GitOps fashion, and using vCluster with Flux and a dash of OIDC makes for an excellent user experience.
As vCluster is available as a Helm chart, we're going to use a standard Flux HelmRepository and HelmRelease to get our initial virtual cluster installed on our cluster.
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: loft
  namespace: flux-system
spec:
  interval: 30m
  url: https://charts.loft.sh
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: test
  namespace: vclusters
spec:
  interval: 12h
  chart:
    spec:
      chart: vcluster
      sourceRef:
        kind: HelmRepository
        name: loft
        namespace: flux-system
      interval: 12h
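This isn't something the original workflow spells out, but once Flux has reconciled the release it's worth confirming the chart actually rolled out, either with the flux CLI or plain kubectl:

$ flux get helmreleases -n vclusters
$ kubectl get pods -n vclusters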
Using the vcluster CLI, we can connect to the new virtual cluster and point our standard kubectl, k9s or Lens tooling at it.
$ vcluster list
NAME | NAMESPACE | STATUS | VERSION | CONNECTED | AGE
-------+-----------+---------+---------+-----------+------------
test | vclusters | Running | 0.24.1 | | 43h43m51s
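For example, assuming the release name and namespace from above, connecting looks something like this; vcluster connect starts a local proxy and switches your kubeconfig context, and vcluster disconnect switches it back:

$ vcluster connect test --namespace vclusters
$ kubectl get namespaces
$ vcluster disconnect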
At first the cluster is only accessible through a port-forwarding proxy, but with a small config change we can expose it externally via a LoadBalancer service, making it easier for standard users to reach.
values:
  controlPlane:
    service:
      enabled: true
      spec:
        type: LoadBalancer
You can take this a step further and configure an Ingress for the API, with all the niceties that provides. In my case, I couldn't easily configure TLS pass-through for the API service using Traefik, so I just stuck with a LoadBalancer service.
Now we can grab the endpoint IP of the service:
$ kubectl get services -n vclusters
NAME                                      TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                  AGE
kube-dns-x-kube-system-x-test             ClusterIP      10.96.13.10     <none>          53/UDP,53/TCP,9153/TCP   43h
test                                      LoadBalancer   10.96.172.117   10.101.10.132   443:30413/TCP            43h
test-headless                             ClusterIP      None            <none>          443/TCP                  43h
test-node-prod-master-01-int-doofnet-uk   ClusterIP      10.96.37.40     <none>          10250/TCP                43h
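As a rough sketch (not something this post relies on), you could pull the generated kubeconfig out of the vc-test secret and point it at that external IP; the secret name and key match the ones Flux uses later in this post, and depending on your environment you may also need an extra SAN covering the IP (see extraSANs below):

$ kubectl get secret vc-test -n vclusters -o jsonpath='{.data.config}' | base64 -d > test.kubeconfig
$ kubectl --kubeconfig=test.kubeconfig config set-cluster \
    "$(kubectl --kubeconfig=test.kubeconfig config view -o jsonpath='{.clusters[0].name}')" \
    --server=https://10.101.10.132
$ kubectl --kubeconfig=test.kubeconfig get namespaces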
Next, we want to enable OIDC. By default vCluster uses a standard k8s distribution for the virtual cluster, so configuration is as easy as providing a few extra arguments, just like you would for a standard kube-apiserver.
values:
  controlPlane:
    distro:
      k8s:
        apiServer:
          extraArgs:
            - --oidc-issuer-url=https://auth.domain.com
            - --oidc-client-id=000111000aaabbbccc
            - "--oidc-username-prefix=oidc:"
            - "--oidc-groups-prefix=oidc:"
            - --oidc-groups-claim=groups
            - --oidc-username-claim=email
The default vcluster CLI doesn't support this use case out of the box, so you'll need to modify the cluster definition in your kubeconfig file to reference your OIDC login user rather than the global certificate (a rough sketch of that client-side change follows further below). Once connected, you'll hit API errors when accessing common resources until you create a ClusterRoleBinding for your newly defined OIDC users. In my case the groups are prefixed with oidc:, so creating a binding is relatively easy. Next, you need to feed that into the vCluster bootstrapping to create the extra items:
experimental:
  deploy:
    vcluster:
      manifests: |-
        ---
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRoleBinding
        metadata:
          name: oidc-cluster-admin
        subjects:
          - kind: Group
            apiGroup: rbac.authorization.k8s.io
            name: oidc:admin
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: cluster-admin
With this example, the admin group defined by OIDC is given the full cluster-admin role. The experimental.deploy.vcluster.manifests value can contain any number of YAML documents, so you can pass a whole collection that defines your entire permissions structure.
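For the client-side kubeconfig change mentioned earlier, one common approach (my assumption here, not something vCluster prescribes) is the kubelogin exec plugin, reusing the issuer and client ID from the example values above; you can then confirm the binding works with kubectl auth can-i:

# Assumes the kubelogin plugin (kubectl oidc-login) is installed; issuer and
# client ID are the example values from above.
$ kubectl config set-credentials oidc-user \
    --exec-api-version=client.authentication.k8s.io/v1beta1 \
    --exec-command=kubectl \
    --exec-arg=oidc-login \
    --exec-arg=get-token \
    --exec-arg=--oidc-issuer-url=https://auth.domain.com \
    --exec-arg=--oidc-client-id=000111000aaabbbccc
$ kubectl config set-context --current --user=oidc-user
# While still connected as the original certificate user (which can impersonate),
# verify the ClusterRoleBinding above is effective for the OIDC group:
$ kubectl auth can-i '*' '*' --as=oidc:someone@domain.com --as-group=oidc:admin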
So we're using Flux to manage the parent cluster and to create the virtual cluster within it, but can we also use Flux to manage the configuration of the virtual cluster? Sure. There are two options: deploy Flux within the virtual cluster itself, or use the parent cluster's Flux to connect to the virtual cluster. I decided to keep all my clusters in one single repository, so why not include my virtual cluster as well?
First of all, we need to make sure the virtual cluster's API server is accessible within the parent cluster. By default a Service is created, but out of the box there's something that'll trip you up: the virtual cluster's API certificates don't have a valid SAN for the cluster.local DNS record the Service will be assigned.
Providing additional SANs can be done by making use of the controlPlane.proxy.extraSANs value:
values:
  controlPlane:
    proxy:
      extraSANs:
        - test.vclusters.svc.cluster.local
The SAN we've defined is test.vclusters.svc.cluster.local, since our Service is named test in the vclusters namespace, making it the correct DNS record for the Service within the cluster. Next, we need to make sure the virtual cluster's kubeconfig is exported with this DNS name included; by default the API URL is localhost to allow for the port-forwarding proxy.
values:
  exportKubeConfig:
    server: https://test.vclusters.svc.cluster.local
Now the vc-test secret will contain the correct URL in its server value, and when Flux uses this kubeconfig the API URL will be correct for the virtual cluster.
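As a quick sanity check (not part of the original steps), you can confirm the exported kubeconfig in the secret now points at the in-cluster DNS name rather than localhost:

$ kubectl get secret vc-test -n vclusters -o jsonpath='{.data.config}' | base64 -d | grep server:
    server: https://test.vclusters.svc.cluster.local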
To configure Flux, all you need to do is define another Kustomization resource that points at a path in your Flux repository, with a kubeConfig value pointing at the virtual cluster's kubeconfig:
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: test-vcluster
  namespace: vclusters
spec:
  interval: 30m
  path: ./clusters/test-vcluster
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
    namespace: flux-system
  kubeConfig:
    secretRef:
      name: vc-test
      key: config
Flux will now do its thing, and you should have managed resources on your virtual cluster! As Events are mirrored over to the parent cluster, you should be able to see any defined configuration being spun up in the virtual cluster.
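If you want to watch this happening from the parent cluster, the flux CLI makes it easy to check status and force a reconcile, and the mirrored Events are visible with plain kubectl:

$ flux get kustomizations -n vclusters
$ flux reconcile kustomization test-vcluster -n vclusters --with-source
$ kubectl get events -n vclusters --sort-by=.lastTimestamp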
Here is our final, full deployment of the virtual cluster. You could potentially make use of Kustomization templating to generate HelmRelease resources for each cluster you need from a single template, but that is beyond the scope of this post.
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: test
  namespace: vclusters
spec:
  interval: 12h
  chart:
    spec:
      chart: vcluster
      sourceRef:
        kind: HelmRepository
        name: loft
        namespace: flux-system
      interval: 12h
  values:
    controlPlane:
      distro:
        k8s:
          apiServer:
            extraArgs:
              - --oidc-issuer-url=https://auth.domain.com
              - --oidc-client-id=000111000aaabbbccc
              - "--oidc-username-prefix=oidc:"
              - "--oidc-groups-prefix=oidc:"
              - --oidc-groups-claim=groups
              - --oidc-username-claim=email
      proxy:
        extraSANs:
          - test.vclusters.svc.cluster.local
      service:
        enabled: true
        spec:
          type: LoadBalancer
      serviceMonitor:
        enabled: true
    exportKubeConfig:
      server: https://test.vclusters.svc.cluster.local
    experimental:
      deploy:
        vcluster:
          manifests: |-
            ---
            apiVersion: rbac.authorization.k8s.io/v1
            kind: ClusterRoleBinding
            metadata:
              name: oidc-cluster-admin
            subjects:
              - kind: Group
                apiGroup: rbac.authorization.k8s.io
                name: oidc:admin
            roleRef:
              apiGroup: rbac.authorization.k8s.io
              kind: ClusterRole
              name: cluster-admin
vcluster.yaml configuration reference - the vcluster.yaml file takes the same values as the Helm chart's values.