Migrating to Namespace-Scoped Manager

By default, Kubebuilder scaffolds cluster-scoped managers that watch and manage resources across all namespaces. This guide shows how to convert an existing cluster-scoped project to a namespace-scoped deployment, limiting the manager to watching only specific namespace(s).

When to Use Namespace-Scoped

Use namespace-scoped when:

  • Building tenant-specific managers in multi-tenant clusters
  • Security policies require least-privilege (no cluster-wide permissions)
  • Running multiple manager instances in different namespaces
  • Managing only namespace-scoped resources (Deployments, Services, ConfigMaps, etc.)

Use cluster-scoped (default) when:

  • Managing cluster-scoped resources (Nodes, ClusterRoles, Namespaces, etc.)
  • Single manager instance managing resources across all namespaces

Migration Steps

Quick Summary:

  1. Run kubebuilder edit --namespaced --force - scaffolds Role/RoleBinding and updates manager.yaml
  2. Update cmd/main.go to configure namespace-scoped cache
  3. Add namespace= parameter to RBAC markers in existing controller files
  4. Run make manifests - regenerates RBAC from updated markers
  5. Verify and deploy

Detailed Steps:

1. Enable namespace-scoped mode

kubebuilder edit --namespaced --force

This command automatically:

  • Sets namespaced: true in your PROJECT file
  • Scaffolds config/rbac/role.yaml with kind: Role (namespace-scoped)
  • Scaffolds config/rbac/role_binding.yaml with kind: RoleBinding
  • Regenerates config/manager/manager.yaml with WATCH_NAMESPACE environment variable
  • Regenerates admin/editor/viewer roles with kind: Role (namespace-scoped) for all existing APIs

Note: The --force flag regenerates config/manager/manager.yaml. Without --force, you must manually add WATCH_NAMESPACE (see below).
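
If you skip --force, add the variable yourself under the manager container in config/manager/manager.yaml. A minimal sketch (the downward-API form shown here, a common pattern, makes the manager watch its own namespace; a hardcoded namespace string also works):

env:
- name: WATCH_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace  # the namespace the manager pod runs in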

2. Update cmd/main.go (Required Manual Step)

The edit command cannot update cmd/main.go automatically. You must manually add namespace-scoped configuration.

a. Add imports (os is already imported in the scaffolded main.go; fmt and strings are needed by the helpers below):

import (
    // ... existing imports ...
    "fmt"
    "strings"

    "sigs.k8s.io/controller-runtime/pkg/cache"
)

b. Add helper functions (after init() and before main()):

// getWatchNamespace returns the namespace(s) the manager should watch for changes.
// It reads the value from the WATCH_NAMESPACE environment variable.
func getWatchNamespace() (string, error) {
    watchNamespaceEnvVar := "WATCH_NAMESPACE"
    ns, found := os.LookupEnv(watchNamespaceEnvVar)
    if !found {
        return "", fmt.Errorf("%s must be set", watchNamespaceEnvVar)
    }
    return ns, nil
}

// setupCacheNamespaces configures the cache to watch specific namespace(s).
func setupCacheNamespaces(namespaces string) cache.Options {
    defaultNamespaces := make(map[string]cache.Config)
    for _, ns := range strings.Split(namespaces, ",") {
        defaultNamespaces[strings.TrimSpace(ns)] = cache.Config{}
    }
    return cache.Options{
        DefaultNamespaces: defaultNamespaces,
    }
}

c. In main() function, before ctrl.NewManager(), add:

// Get the namespace(s) for namespace-scoped mode from WATCH_NAMESPACE environment variable.
watchNamespace, err := getWatchNamespace()
if err != nil {
    setupLog.Error(err, "Unable to get WATCH_NAMESPACE")
    os.Exit(1)
}

d. Update manager creation to use namespace-scoped cache:

mgrOptions := ctrl.Options{
    Scheme:                 scheme,
    Metrics:                metricsServerOptions,
    WebhookServer:          webhookServer,
    HealthProbeBindAddress: probeAddr,
    LeaderElection:         enableLeaderElection,
    LeaderElectionID:       "your-leader-election-id",
    // ... other existing options ...
}

// Configure cache to watch namespace(s) specified in WATCH_NAMESPACE
mgrOptions.Cache = setupCacheNamespaces(watchNamespace)
setupLog.Info("Watching namespace(s)", "namespaces", watchNamespace)

mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), mgrOptions)
if err != nil {
    setupLog.Error(err, "Failed to start manager")
    os.Exit(1)
}

3. Update RBAC markers in existing controllers

For each existing controller file, add the namespace= parameter to RBAC markers.

Find controller files:

  • Look for files containing func (r *SomeReconciler) Reconcile(
  • Common locations: internal/controller/*_controller.go

In internal/controller/cronjob_controller.go:

Before (cluster-scoped):

// +kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs/finalizers,verbs=update

// Reconcile is part of the main kubernetes reconciliation loop
func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {

After (namespace-scoped):

// +kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,namespace=<project-name>-system,resources=cronjobs,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,namespace=<project-name>-system,resources=cronjobs/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,namespace=<project-name>-system,resources=cronjobs/finalizers,verbs=update

// Reconcile is part of the main kubernetes reconciliation loop
func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {

Replace <project-name>-system with your namespace (found in config/default/kustomization.yaml under the namespace: field).

4. Regenerate RBAC manifests

After updating RBAC markers in Step 3, regenerate the RBAC manifests:

make manifests      # Regenerate RBAC from updated controller markers

Verify the generated files show kind: Role instead of kind: ClusterRole:

config/rbac/role.yaml:

kind: Role
metadata:
  name: manager-role
  # Note: namespace is added by kustomize during build, not in source

config/rbac/*_editor_role.yaml, *_viewer_role.yaml, *_admin_role.yaml:

kind: Role
metadata:
  name: cronjob-editor-role
  # Note: namespace is added by kustomize during build, not in source

5. Verify and deploy

Run tests to verify everything works:

make generate       # Regenerate code
make test           # Run tests

Deploy and verify:

make deploy IMG=<your-image>

# Verify RBAC is namespace-scoped (not cluster-scoped)
kubectl get role,rolebinding -n <manager-namespace>

# Test: Create a resource in the manager's namespace - should be reconciled
kubectl apply -f config/samples/ -n <manager-namespace>

# Test: Create a resource in a different namespace - should NOT be reconciled
kubectl apply -f config/samples/ -n other-namespace
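
You can also confirm the scope from the manager logs: the setupLog.Info call added in Step 2 prints the watched namespace(s) at startup. A quick check (assuming the default deployment name <project-name>-controller-manager):

# Should print: Watching namespace(s)  namespaces=<manager-namespace>
kubectl logs deployment/<project-name>-controller-manager -n <manager-namespace> | grep "Watching namespace"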

AI-Assisted Migration

If you’re using an AI coding assistant (Cursor, GitHub Copilot, etc.), you can automate the manual migration steps.

Multi-Namespace Support

The WATCH_NAMESPACE environment variable supports comma-separated values to watch multiple specific namespaces:

env:
- name: WATCH_NAMESPACE
  value: "namespace-1,namespace-2,namespace-3"

Note: You’ll need to create Role/RoleBinding in each namespace for proper RBAC.
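
As a sketch of what that looks like in each additional watched namespace (names below follow the default <project-name>- prefix convention; adjust to your project), the RoleBinding grants the manager's ServiceAccount, which lives in the deployment namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <project-name>-manager-rolebinding
  namespace: namespace-1               # repeat for each watched namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: <project-name>-manager-role    # a matching Role must also exist in this namespace
subjects:
- kind: ServiceAccount
  name: <project-name>-controller-manager
  namespace: <project-name>-system     # the namespace where the manager is deployed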

Reverting to Cluster-Scoped

To revert back to cluster-scoped:

kubebuilder edit --namespaced=false --force

This command automatically:

  • Sets namespaced: false in your PROJECT file
  • Scaffolds config/rbac/role.yaml with kind: ClusterRole
  • Scaffolds config/rbac/role_binding.yaml with kind: ClusterRoleBinding
  • With --force, regenerates config/manager/manager.yaml without the WATCH_NAMESPACE environment variable

Manual steps required:

  1. Remove namespace= parameter from RBAC markers in all controller files
  2. Run make manifests to regenerate cluster-scoped RBAC
  3. Remove namespace-scoped code from cmd/main.go (see the sketch after this list):
    • Remove getWatchNamespace() function
    • Remove setupCacheNamespaces() function
    • Remove namespace retrieval and cache configuration
    • Remove added imports (fmt, strings, cache) if not used elsewhere
  4. If you didn’t use --force, manually remove WATCH_NAMESPACE from config/manager/manager.yaml
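
After those removals, manager creation collapses back to the default scaffold, roughly (a sketch; your other options stay as they were):

mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
    Scheme:                 scheme,
    Metrics:                metricsServerOptions,
    WebhookServer:          webhookServer,
    HealthProbeBindAddress: probeAddr,
    LeaderElection:         enableLeaderElection,
    LeaderElectionID:       "your-leader-election-id",
    // No Cache field: the default cache watches all namespaces (cluster-scoped)
})
if err != nil {
    setupLog.Error(err, "Failed to start manager")
    os.Exit(1)
}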

Important Notes

  • Only controllers need RBAC updates: Only update +kubebuilder:rbac markers in controller files (files with Reconcile function). Webhook files do NOT use RBAC markers - webhooks use certificate-based authentication with the API server.
  • Webhooks remain cluster-scoped: ValidatingWebhookConfiguration and MutatingWebhookConfiguration are cluster-scoped resources that validate/mutate CRs in all namespaces. This is correct - webhooks enforce schema consistency across the cluster, while controllers (namespace-scoped) only reconcile resources in their watched namespace(s).
  • RBAC markers control scope: The namespace= parameter in controller RBAC markers determines whether controller-gen generates Role (namespace-scoped) or ClusterRole (cluster-scoped). Without the namespace= parameter, controller-gen always generates ClusterRole.
  • Controller-gen regenerates role.yaml: After running make manifests, controller-gen will regenerate config/rbac/role.yaml based on your controller RBAC markers. The initial Role scaffold from kubebuilder edit --namespaced=true serves as a template, but controller-gen manages the actual content.
  • Namespace parameter format: Use namespace=<your-namespace> in controller RBAC markers, typically namespace=<project-name>-system to match your deployment namespace.
  • Metrics auth role stays cluster-scoped: The metrics-auth-role uses cluster-scoped APIs (TokenReview, SubjectAccessReview) and correctly remains a ClusterRole without namespace parameter.

See Also