Quick Take

I combined Cobra and Viper to create a highly configurable Kubernetes controller that handles command-line flags, environment variables, and configuration files with clear precedence rules and strong validation.

Introduction

In the previous parts of this series, we covered building a basic admission controller, implementing security measures, and setting up releases. Now let’s dive into one of the more interesting aspects of the project - implementing flexible configuration handling using Cobra and Viper.

While I had previous experience with Cobra for building CLI applications, Viper was new to me. It was another example of AI suggesting a tool that turned out to be exactly what I needed. The combination of Cobra and Viper allowed us to create a highly configurable controller that can be customized through environment variables, command-line flags, or configuration files.

The Configuration Challenge

When deploying applications to Kubernetes, you often need to support multiple configuration methods:

  • Command-line flags for local development and testing
  • Environment variables for container deployments
  • Configuration files for complex settings
  • Sensible defaults for quick starts

Our admission controller needed to handle all these cases while maintaining clear precedence rules and validation. The configuration also needed to be easily testable and well-documented.

Implementing Configuration with Cobra and Viper

The core of our configuration system lives in the config package. Here’s how I structured the configuration:

type Config struct {
    // Server configuration
    Address         string        // The address and port to listen on (e.g., "0.0.0.0:8443")
    CertFile        string        // Path to the TLS certificate file
    KeyFile         string        // Path to the TLS private key file
    GracefulTimeout time.Duration // Maximum time to wait for server shutdown

    // Logging configuration
    LogLevel string // Log level (trace, debug, info, warn, error, fatal, panic)
    Console  bool   // Whether to use console-formatted logging instead of JSON
}

The main application uses Cobra to define the command structure:

var (
    cfgFile string
    
    rootCmd = &cobra.Command{
        Use:   "webhook",
        Short: "Kubernetes admission webhook for pod labeling",
        Long:  `A webhook server that adds labels to pods using Kubernetes admission webhooks`,
        PreRunE: func(cmd *cobra.Command, args []string) error {
            cfg, err := config.LoadConfig(cfgFile)
            if err != nil {
                return err
            }

            if err := cfg.Validate(); err != nil {
                return err
            }

            cfg.InitializeLogging()
            return nil
        },
        RunE: func(cmd *cobra.Command, args []string) error {
            cfg, err := config.LoadConfig(cfgFile)
            if err != nil {
                return err
            }
            server, err := webhook.NewServer(cfg)
            if err != nil {
                return err
            }
            return server.Run()
        },
    }
)

I found that structuring the code this way provided a clean separation between configuration loading, validation, and the actual server execution. This approach made the codebase more maintainable and easier to test.

Configuration Loading and Validation

One of the most valuable suggestions from the AI was implementing robust configuration loading and validation. Here’s how I handle loading configuration from multiple sources:

func LoadConfig(cfgFile string) (*Config, error) {
    config := New()

    // Set up viper for environment variables
    viper.SetEnvPrefix("WEBHOOK")
    viper.SetEnvKeyReplacer(strings.NewReplacer("-", "_"))
    viper.AutomaticEnv()

    // Load configuration file if specified
    if cfgFile != "" {
        viper.SetConfigFile(cfgFile)
        if err := viper.ReadInConfig(); err != nil {
            if _, ok := err.(viper.ConfigParseError); ok {
                return nil, fmt.Errorf("error parsing config: %v", err)
            }
            return nil, fmt.Errorf("error reading config file: %v", err)
        }
        log.Info().Str("config", viper.ConfigFileUsed()).Msg("Using config file")
    }

    // Update config from viper (environment variables or config file values)
    if viper.IsSet("address") {
        config.Address = viper.GetString("address")
    }
    // ... more configuration loading ...

    return config, nil
}
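The `SetEnvPrefix` and `SetEnvKeyReplacer` calls together determine which environment variable Viper consults for each key. The mapping can be reproduced with the standard library alone (Viper applies the prefix, uppercasing, and replacer in roughly this way):

```go
package main

import (
	"fmt"
	"strings"
)

// envKey reproduces viper's lookup name for a config key, given the
// WEBHOOK prefix and the "-" -> "_" key replacer set up above.
func envKey(key string) string {
	replaced := strings.NewReplacer("-", "_").Replace(key)
	return "WEBHOOK_" + strings.ToUpper(replaced)
}

func main() {
	for _, k := range []string{"address", "log-level", "graceful-timeout"} {
		fmt.Printf("%s -> %s\n", k, envKey(k))
	}
	// address -> WEBHOOK_ADDRESS
	// log-level -> WEBHOOK_LOG_LEVEL
	// graceful-timeout -> WEBHOOK_GRACEFUL_TIMEOUT
}
```

This is why the tests later in the post set variables like `WEBHOOK_LOG_LEVEL` rather than `log-level`.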

The configuration system follows a clear precedence order:

  1. Command line flags
  2. Environment variables
  3. Configuration file
  4. Default values

This approach provides flexibility for different deployment scenarios while maintaining predictable behavior.
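Conceptually, the precedence order behaves like a first-match lookup across the sources. Viper implements this internally; a stdlib-only sketch of the idea (names are illustrative):

```go
package main

import "fmt"

// resolve returns the first non-empty value, mirroring the precedence
// order: explicit flag, then environment variable, then config-file
// entry, then the built-in default.
func resolve(flagVal, envVal, fileVal, defaultVal string) string {
	for _, v := range []string{flagVal, envVal, fileVal} {
		if v != "" {
			return v
		}
	}
	return defaultVal
}

func main() {
	// No flag given; env var and file both set the log level: env wins.
	fmt.Println(resolve("", "debug", "info", "warn")) // debug
}
```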

Validation and Type Safety

One of the more interesting challenges was implementing proper validation while maintaining type safety. The AI helped create a comprehensive validation system:

func (c *Config) Validate() error {
    // Validate logging configuration
    if _, err := zerolog.ParseLevel(c.LogLevel); err != nil {
        return fmt.Errorf("invalid log level %q: %v", c.LogLevel, err)
    }

    // Validate address format
    host, port, err := net.SplitHostPort(c.Address)
    if err != nil {
        return fmt.Errorf("invalid address format %q: %v", c.Address, err)
    }

    // Validate port
    if _, err := net.LookupPort("tcp", port); err != nil {
        return fmt.Errorf("invalid port %q: %v", port, err)
    }

    // Validate host if specified
    if host != "" && host != "0.0.0.0" {
        if ip := net.ParseIP(host); ip == nil {
            return fmt.Errorf("invalid IP address: %q", host)
        }
    }

    // Validate graceful timeout
    if c.GracefulTimeout <= 0 {
        return fmt.Errorf("graceful timeout must be positive, got %v", c.GracefulTimeout)
    }

    return nil
}
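The address checks above can be exercised on their own. Here is a stdlib-only extraction of that logic into a hypothetical `validateAddress` helper, showing which inputs pass and which fail:

```go
package main

import (
	"fmt"
	"net"
)

// validateAddress applies the same host/port checks as Config.Validate.
func validateAddress(addr string) error {
	host, port, err := net.SplitHostPort(addr)
	if err != nil {
		return fmt.Errorf("invalid address format %q: %v", addr, err)
	}
	if _, err := net.LookupPort("tcp", port); err != nil {
		return fmt.Errorf("invalid port %q: %v", port, err)
	}
	if host != "" && host != "0.0.0.0" {
		if ip := net.ParseIP(host); ip == nil {
			return fmt.Errorf("invalid IP address: %q", host)
		}
	}
	return nil
}

func main() {
	for _, addr := range []string{"0.0.0.0:8443", "127.0.0.1:8443", "no-port", ":99999"} {
		fmt.Printf("%-16s -> %v\n", addr, validateAddress(addr))
	}
}
```

Note that `net.SplitHostPort` rejects a bare address with no port, and `net.LookupPort` rejects numeric ports outside the valid range, so both classes of mistake are caught at startup rather than at bind time.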

I found that thorough validation saved me countless hours of debugging later. For example, catching network address issues early prevented difficult-to-diagnose connection problems when deploying to Kubernetes.

Testing Configuration

Testing the configuration system thoroughly was essential. I needed to ensure it worked correctly across different scenarios and input methods. Here’s an example of testing different configuration scenarios:

func TestLoadConfig(t *testing.T) {
    tests := []struct {
        name       string
        configFile string
        envVars    map[string]string
        want       *Config
        wantErr    bool
        errMsg     string
    }{
        {
            name:       "load from valid config file",
            configFile: validConfigFile,
            want: &Config{
                Address:  "127.0.0.1:8443",
                CertFile: "/custom/cert/path",
                KeyFile:  "/custom/key/path",
                LogLevel: "debug",
                Console:  true,
            },
        },
        {
            name:       "load from environment",
            configFile: "",
            envVars: map[string]string{
                "WEBHOOK_ADDRESS":   "localhost:8443",
                "WEBHOOK_LOG_LEVEL": "debug",
            },
            want: &Config{
                Address:  "localhost:8443",
                LogLevel: "debug",
            },
        },
        // More test cases...
    }
    // Test implementation...
}

Lessons Learned

Implementing configuration with Cobra and Viper taught me several valuable lessons:

  1. Validate early and thoroughly: Comprehensive validation at startup catches configuration errors before they cause runtime issues.
  2. Test all configuration paths: Each configuration source (defaults, files, environment variables, command-line flags) needs thorough testing.
  3. Document configuration options: Clear documentation of all configuration options and their precedence is essential for users.

Looking Forward

With a robust configuration system in place, our controller is becoming more production-ready. The combination of Cobra and Viper gives us the flexibility to deploy in various environments while maintaining strict validation and type safety.

In the next part of this series, I’ll look at implementing integration tests so that automated dependency updates from Dependabot can be merged safely and verified before each release.

The complete configuration implementation can be found in the GitHub repository, including all validation logic and test cases.