Migrating to Namespace-Scoped Manager
By default, Kubebuilder scaffolds cluster-scoped managers that watch and manage resources across all namespaces. This guide shows how to convert an existing cluster-scoped project to a namespace-scoped deployment, limiting the manager to watching only specific namespace(s).
When to Use Namespace-Scoped
Use namespace-scoped when:
- Building tenant-specific managers in multi-tenant clusters
- Security policies require least-privilege (no cluster-wide permissions)
- Need multiple manager instances in different namespaces
- Managing only namespace-scoped resources (Deployments, Services, ConfigMaps, etc.)
Use cluster-scoped (default) when:
- Managing cluster-scoped resources (Nodes, ClusterRoles, Namespaces, etc.)
- Single manager instance managing resources across all namespaces
Migration Steps
Quick Summary:
- Run kubebuilder edit --namespaced --force to scaffold the Role/RoleBinding and update manager.yaml
- Update cmd/main.go to configure the namespace-scoped cache
- Add the namespace= parameter to RBAC markers in existing controller files
- Run make manifests to regenerate RBAC from the updated markers
- Verify and deploy
Detailed Steps:
1. Enable namespace-scoped mode
kubebuilder edit --namespaced --force
This command automatically:
- Sets namespaced: true in your PROJECT file
- Scaffolds config/rbac/role.yaml with kind: Role (namespace-scoped)
- Scaffolds config/rbac/role_binding.yaml with kind: RoleBinding
- Regenerates config/manager/manager.yaml with the WATCH_NAMESPACE environment variable
- Regenerates the admin/editor/viewer roles with kind: Role (namespace-scoped) for all existing APIs
Note: The --force flag regenerates config/manager/manager.yaml. Without --force, you must manually add WATCH_NAMESPACE (see below).
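For reference, the resulting PROJECT file entry looks roughly like this (the surrounding keys are illustrative and will differ in your project; only the namespaced field is set by the command):

```yaml
# PROJECT (excerpt) — domain and projectName below are placeholders
domain: tutorial.kubebuilder.io
projectName: cronjob
namespaced: true
```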
2. Update cmd/main.go (Required Manual Step)
The edit command cannot update cmd/main.go automatically. You must manually add namespace-scoped configuration.
a. Add imports (the helper functions below use fmt and strings; os is usually already imported in the scaffolded main.go):
import (
	// ... existing imports ...
	"fmt"
	"strings"

	"sigs.k8s.io/controller-runtime/pkg/cache"
)
b. Add helper functions (after init() and before main()):
// getWatchNamespace returns the namespace(s) the manager should watch for changes.
// It reads the value from the WATCH_NAMESPACE environment variable.
func getWatchNamespace() (string, error) {
watchNamespaceEnvVar := "WATCH_NAMESPACE"
ns, found := os.LookupEnv(watchNamespaceEnvVar)
if !found {
return "", fmt.Errorf("%s must be set", watchNamespaceEnvVar)
}
return ns, nil
}
// setupCacheNamespaces configures the cache to watch specific namespace(s).
func setupCacheNamespaces(namespaces string) cache.Options {
defaultNamespaces := make(map[string]cache.Config)
for _, ns := range strings.Split(namespaces, ",") {
defaultNamespaces[strings.TrimSpace(ns)] = cache.Config{}
}
return cache.Options{
DefaultNamespaces: defaultNamespaces,
}
}
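The split-and-trim behavior of setupCacheNamespaces can be sanity-checked in isolation. The sketch below is a stdlib-only stand-in (parseWatchNamespaces is a hypothetical helper, not part of the scaffold) that mirrors the same logic, using an empty-struct set in place of cache.Config values:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// parseWatchNamespaces mirrors the split/trim logic of setupCacheNamespaces,
// returning a set of namespace names instead of cache.Config entries.
func parseWatchNamespaces(raw string) map[string]struct{} {
	out := make(map[string]struct{})
	for _, ns := range strings.Split(raw, ",") {
		out[strings.TrimSpace(ns)] = struct{}{}
	}
	return out
}

func main() {
	// Stray whitespace around the commas is trimmed away.
	nss := parseWatchNamespaces("team-a, team-b ,team-c")
	keys := make([]string, 0, len(nss))
	for ns := range nss {
		keys = append(keys, ns)
	}
	sort.Strings(keys)
	fmt.Println(keys) // [team-a team-b team-c]
}
```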
c. In main() function, before ctrl.NewManager(), add:
// Get the namespace(s) for namespace-scoped mode from WATCH_NAMESPACE environment variable.
watchNamespace, err := getWatchNamespace()
if err != nil {
setupLog.Error(err, "Unable to get WATCH_NAMESPACE")
os.Exit(1)
}
d. Update manager creation to use namespace-scoped cache:
mgrOptions := ctrl.Options{
Scheme: scheme,
Metrics: metricsServerOptions,
WebhookServer: webhookServer,
HealthProbeBindAddress: probeAddr,
LeaderElection: enableLeaderElection,
LeaderElectionID: "your-leader-election-id",
// ... other existing options ...
}
// Configure cache to watch namespace(s) specified in WATCH_NAMESPACE
mgrOptions.Cache = setupCacheNamespaces(watchNamespace)
setupLog.Info("Watching namespace(s)", "namespaces", watchNamespace)
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), mgrOptions)
if err != nil {
setupLog.Error(err, "Failed to start manager")
os.Exit(1)
}
3. Update RBAC markers in existing controllers
For each existing controller file, add the namespace= parameter to RBAC markers.
Find controller files:
- Look for files containing func (r *SomeReconciler) Reconcile(
- Common location: internal/controller/*_controller.go
In internal/controller/cronjob_controller.go:
Before (cluster-scoped):
// +kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs/finalizers,verbs=update
// Reconcile is part of the main kubernetes reconciliation loop
func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
After (namespace-scoped):
// +kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,namespace=<project-name>-system,resources=cronjobs,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,namespace=<project-name>-system,resources=cronjobs/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,namespace=<project-name>-system,resources=cronjobs/finalizers,verbs=update
// Reconcile is part of the main kubernetes reconciliation loop
func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
Replace <project-name>-system with your actual namespace (found in config/default/kustomization.yaml under the namespace: field).
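For example, the namespace field in config/default/kustomization.yaml looks like this (the value below is a placeholder; use whatever your own file contains):

```yaml
# config/default/kustomization.yaml (excerpt)
namespace: myproject-system
```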
4. Regenerate RBAC manifests
After updating RBAC markers in Step 3, regenerate the RBAC manifests:
make manifests # Regenerate RBAC from updated controller markers
Verify the generated files show kind: Role instead of kind: ClusterRole:
config/rbac/role.yaml:
kind: Role
metadata:
name: manager-role
# Note: namespace is added by kustomize during build, not in source
config/rbac/*_editor_role.yaml, *_viewer_role.yaml, *_admin_role.yaml:
kind: Role
metadata:
name: cronjob-editor-role
# Note: namespace is added by kustomize during build, not in source
5. Verify and deploy
Run tests to verify everything works:
make generate # Regenerate code
make test # Run tests
Deploy and verify:
make deploy IMG=<your-image>
# Verify RBAC is namespace-scoped (not cluster-scoped)
kubectl get role,rolebinding -n <manager-namespace>
# Test: Create a resource in the manager's namespace - should be reconciled
kubectl apply -f config/samples/ -n <manager-namespace>
# Test: Create a resource in a different namespace - should NOT be reconciled
kubectl apply -f config/samples/ -n other-namespace
AI-Assisted Migration
If you’re using an AI coding assistant (Cursor, GitHub Copilot, etc.), you can automate the manual migration steps.
Multi-Namespace Support
The WATCH_NAMESPACE environment variable supports comma-separated values to watch multiple specific namespaces:
env:
- name: WATCH_NAMESPACE
value: "namespace-1,namespace-2,namespace-3"
Note: You’ll need to create Role/RoleBinding in each namespace for proper RBAC.
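Watching an extra namespace means granting the manager's ServiceAccount the same permissions there. A minimal sketch of such a Role/RoleBinding pair (all names, namespaces, and rules below are placeholders; your generated config/rbac/role.yaml is the authoritative rule set):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: manager-role          # placeholder name
  namespace: namespace-2      # one of the extra watched namespaces
rules:
  - apiGroups: ["batch.tutorial.kubebuilder.io"]
    resources: ["cronjobs"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: manager-rolebinding   # placeholder name
  namespace: namespace-2
subjects:
  - kind: ServiceAccount
    name: controller-manager      # placeholder ServiceAccount name
    namespace: myproject-system   # namespace where the manager runs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: manager-role
```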
Reverting to Cluster-Scoped
To revert back to cluster-scoped:
kubebuilder edit --namespaced=false --force
This command automatically:
- Sets namespaced: false in your PROJECT file
- Scaffolds config/rbac/role.yaml with kind: ClusterRole
- Scaffolds config/rbac/role_binding.yaml with kind: ClusterRoleBinding
- With --force: regenerates config/manager/manager.yaml without the WATCH_NAMESPACE env var
Manual steps required:
- Remove the namespace= parameter from RBAC markers in all controller files
- Run make manifests to regenerate cluster-scoped RBAC
- Remove the namespace-scoped code from cmd/main.go:
  - Remove the getWatchNamespace() function
  - Remove the setupCacheNamespaces() function
  - Remove the namespace retrieval and cache configuration
  - Remove the added imports (fmt, strings, cache) if not used elsewhere
- If you didn't use --force, manually remove WATCH_NAMESPACE from config/manager/manager.yaml
Important Notes
- Only controllers need RBAC updates: Only update +kubebuilder:rbac markers in controller files (files with a Reconcile function). Webhook files do NOT use RBAC markers; webhooks use certificate-based authentication with the API server.
- Webhooks remain cluster-scoped: ValidatingWebhookConfiguration and MutatingWebhookConfiguration are cluster-scoped resources that validate/mutate CRs in all namespaces. This is correct: webhooks enforce schema consistency across the cluster, while namespace-scoped controllers only reconcile resources in their watched namespace(s).
- RBAC markers control scope: The namespace= parameter in controller RBAC markers determines whether controller-gen generates a Role (namespace-scoped) or a ClusterRole (cluster-scoped). Without the namespace= parameter, controller-gen always generates a ClusterRole.
- Controller-gen regenerates role.yaml: After running make manifests, controller-gen regenerates config/rbac/role.yaml based on your controller RBAC markers. The initial Role scaffold from kubebuilder edit --namespaced=true serves as a template, but controller-gen manages the actual content.
- Namespace parameter format: Use namespace=<your-namespace> in controller RBAC markers, typically namespace=<project-name>-system to match your deployment namespace.
- Metrics auth role stays cluster-scoped: The metrics-auth-role uses cluster-scoped APIs (TokenReview, SubjectAccessReview) and correctly remains a ClusterRole without a namespace parameter.
See Also
- Manager Scope - Detailed explanation of manager scope concepts
- Project Config - PROJECT file configuration reference