Quick Take
I built a working Kubernetes admission controller in just two hours with Claude’s help after months of frustration with other approaches. This post shares my journey through the challenges, breakthroughs, and key learnings from building a production-ready controller with AI assistance.
Introduction
Since starting with Kubernetes in 2019, I’ve encountered numerous complex problems. One recurring issue particularly frustrated me: deployments failing because developers had set their Pod Disruption Budget (PDB) to require at least one pod to be available (minAvailable: 1) when their deployment had only a single replica. This effectively meant the pod could never be evicted or moved, blocking normal cluster operations.
I knew an admission controller that validated PDBs against their deployments would solve this elegantly. The controller could catch this mismatch before the PDB was created, preventing the issue entirely. But every time I attempted to build one, I hit a wall. The official documentation pointed to the test admission controller used in the Kubernetes release process as an example, but understanding how all the pieces fit together was overwhelming.
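For concreteness, here is a sketch of the check I had in mind. This is hypothetical code, not something I had built at that point, and it only handles an integer minAvailable; a percentage value or a maxUnavailable field would need the same treatment:

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    policyv1 "k8s.io/api/policy/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

// validatePDB rejects a PDB whose integer minAvailable equals or exceeds the
// replica count of the deployment it covers, since such a PDB never permits
// an eviction.
func validatePDB(pdb *policyv1.PodDisruptionBudget, deploy *appsv1.Deployment) error {
    if pdb.Spec.MinAvailable == nil || deploy.Spec.Replicas == nil {
        return nil
    }
    if pdb.Spec.MinAvailable.Type != intstr.Int {
        return nil // percentage values need separate handling, omitted here
    }
    if pdb.Spec.MinAvailable.IntVal >= *deploy.Spec.Replicas {
        return fmt.Errorf("minAvailable %d leaves no evictable pod in a deployment with %d replica(s)",
            pdb.Spec.MinAvailable.IntVal, *deploy.Spec.Replicas)
    }
    return nil
}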
My first attempt to overcome this barrier was with ChatGPT. I assumed AI assistance could help navigate the complexity. While ChatGPT provided code snippets from the Kubernetes repository, it wasn’t sufficient. I needed more than just code—I needed to understand how everything worked together.
Then I got access to Claude and, during a football game of all things, finally made a breakthrough. This is the story of how two hours of AI pair programming accomplished what I hadn’t been able to figure out for months.
The Initial Attempts with ChatGPT
I first approached ChatGPT with optimism—surely AI could help me structure this admission controller. That optimism quickly faded. ChatGPT generated what appeared to be reasonable code, but attempting to run it revealed deeper problems. The code lacked crucial components, and ChatGPT’s project structure suggestions felt disconnected from typical Kubernetes project organization.
I spent hours debugging issues, but each attempt led me in circles. ChatGPT would suggest one fix, then another, without addressing the fundamental problems. It felt like trying to build a house without knowing where to place the foundation.
Eventually, I loaded the entire Kubernetes codebase into my IDE to track down the missing pieces myself. While this helped me identify what was missing, it became clear that this approach wouldn’t help me create an MVP as quickly as I wanted. Frustrated by hours of effort with minimal progress, I shelved the project again.
Enter Claude
At my new job, I could choose between ChatGPT and Claude. I’d heard Claude excelled at handling code and writing tasks, so I decided to try it. Over the next month, I used Claude for various projects, learning how to work with an AI that maintained context and handled complex technical discussions.
During Thanksgiving break, with a football game on TV, I wondered if Claude could succeed where ChatGPT had failed. I started a new chat with a simple prompt: build a mutating admission controller that adds a “hello=world” label to resources. I figured this would be a good test case; while I no longer needed the PDB validation controller, I was curious to see how much of a Kubernetes project Claude could help me build.
Building the Controller
My journey began with a straightforward prompt: “Write a Kubernetes admission controller that adds the label hello=world to every pod”. Claude responded by generating a complete controller project structure. Here’s the initial code it produced:
package main

import (
    "encoding/json"
    "fmt"
    "io"
    "net/http"

    admissionv1 "k8s.io/api/admission/v1"
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/runtime/serializer"
)

var (
    runtimeScheme = runtime.NewScheme()
    codecs        = serializer.NewCodecFactory(runtimeScheme)
    deserializer  = codecs.UniversalDeserializer()
)

// patchOp is a single RFC 6902 JSON patch operation.
type patchOp struct {
    Op    string      `json:"op"`
    Path  string      `json:"path"`
    Value interface{} `json:"value"`
}

// buildLabelPatch returns the patch operations that add the hello=world label.
func buildLabelPatch(pod *corev1.Pod) []patchOp {
    if pod.Labels == nil {
        // no labels map yet: create it with the label in a single operation
        return []patchOp{{Op: "add", Path: "/metadata/labels", Value: map[string]string{"hello": "world"}}}
    }
    return []patchOp{{Op: "add", Path: "/metadata/labels/hello", Value: "world"}}
}

// handleMutate handles admission review requests
func handleMutate(w http.ResponseWriter, r *http.Request) {
    // verify the content type is accurate
    if contentType := r.Header.Get("Content-Type"); contentType != "application/json" {
        http.Error(w, "invalid Content-Type, want `application/json`", http.StatusUnsupportedMediaType)
        return
    }
    body, err := io.ReadAll(r.Body)
    if err != nil {
        http.Error(w, "Failed to read request body", http.StatusBadRequest)
        return
    }
    // parse the AdmissionReview request
    ar := admissionv1.AdmissionReview{}
    if _, _, err := deserializer.Decode(body, nil, &ar); err != nil {
        http.Error(w, fmt.Sprintf("could not decode body: %v", err), http.StatusBadRequest)
        return
    }
    if ar.Request == nil {
        http.Error(w, "malformed admission review: request is nil", http.StatusBadRequest)
        return
    }
    // get the Pod object
    pod := &corev1.Pod{}
    if err := json.Unmarshal(ar.Request.Object.Raw, pod); err != nil {
        http.Error(w, fmt.Sprintf("could not unmarshal pod object: %v", err), http.StatusBadRequest)
        return
    }
    // create the JSON patch that adds the label
    patchBytes, err := json.Marshal(buildLabelPatch(pod))
    if err != nil {
        http.Error(w, fmt.Sprintf("could not marshal JSON patch: %v", err), http.StatusInternalServerError)
        return
    }
    // create the response, echoing the request UID back as the API server requires
    patchType := admissionv1.PatchTypeJSONPatch
    ar.Response = &admissionv1.AdmissionResponse{
        UID:       ar.Request.UID,
        Allowed:   true,
        Patch:     patchBytes,
        PatchType: &patchType,
    }
    resp, err := json.Marshal(ar)
    if err != nil {
        http.Error(w, fmt.Sprintf("could not encode response: %v", err), http.StatusInternalServerError)
        return
    }
    w.Header().Set("Content-Type", "application/json")
    w.Write(resp)
}

func main() {
    http.HandleFunc("/mutate", handleMutate)
    fmt.Println("Starting webhook server on :8443")
    if err := http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil); err != nil {
        panic(err)
    }
}
Getting the basic structure was just the beginning. I needed to test this controller, which meant integrating cert-manager for TLS certificate handling. Claude helped adapt the code to fetch and use these certificates properly, though occasionally it included unused imports that needed cleanup.
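For illustration, the adapted startup looked roughly like the sketch below. It assumes the Deployment mounts the cert-manager-issued TLS Secret at /etc/webhook/certs (the mount path is a choice made in the manifest, not anything cert-manager mandates); handleMutate is the handler from the code above:

package main

import (
    "log"
    "net/http"
)

// cert-manager writes tls.crt and tls.key into a Secret; in this sketch the
// Deployment mounts that Secret at the (hypothetical) path /etc/webhook/certs.
const (
    certPath = "/etc/webhook/certs/tls.crt"
    keyPath  = "/etc/webhook/certs/tls.key"
)

func main() {
    http.HandleFunc("/mutate", handleMutate)
    log.Println("Starting webhook server on :8443")
    if err := http.ListenAndServeTLS(":8443", certPath, keyPath, nil); err != nil {
        log.Fatalf("webhook server failed: %v", err)
    }
}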
For testing, we set up a framework using kind (Kubernetes in Docker). Claude helped create a Makefile that automated the entire process—spinning up a test cluster, deploying the controller, and cleaning up afterward. This is where we encountered our first real debugging challenge: the controller pod wouldn’t start. Examining Kubernetes events showed the pod was stuck in a pending state, waiting for admission.
The problem became clear after I added more detailed logging—the controller needed to handle its own admission request but wasn’t running yet, creating a deadlock situation. After several iterations of logging improvements and event inspection with Claude’s guidance, we fixed the startup sequence.
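One common way to break this kind of chicken-and-egg cycle (not necessarily the exact fix we landed on) is to have the webhook configuration skip the controller’s own pods and fail open, sketched here with client-go’s admissionregistration types. The names and the app=hello-webhook label are hypothetical:

package main

import (
    admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildWebhookConfig builds a MutatingWebhookConfiguration that skips the
// controller's own pods and fails open, so the controller can be scheduled
// before its webhook endpoint is reachable.
func buildWebhookConfig(caBundle []byte) *admissionregistrationv1.MutatingWebhookConfiguration {
    ignore := admissionregistrationv1.Ignore
    sideEffects := admissionregistrationv1.SideEffectClassNone
    path := "/mutate"
    return &admissionregistrationv1.MutatingWebhookConfiguration{
        ObjectMeta: metav1.ObjectMeta{Name: "hello-webhook"},
        Webhooks: []admissionregistrationv1.MutatingWebhook{{
            Name:                    "hello-webhook.example.com",
            AdmissionReviewVersions: []string{"v1"},
            SideEffects:             &sideEffects,
            // fail open: pods are admitted even while the webhook is down
            FailurePolicy: &ignore,
            // skip the controller's own pods to avoid the startup deadlock
            ObjectSelector: &metav1.LabelSelector{
                MatchExpressions: []metav1.LabelSelectorRequirement{{
                    Key:      "app",
                    Operator: metav1.LabelSelectorOpNotIn,
                    Values:   []string{"hello-webhook"},
                }},
            },
            ClientConfig: admissionregistrationv1.WebhookClientConfig{
                Service: &admissionregistrationv1.ServiceReference{
                    Namespace: "default",
                    Name:      "hello-webhook",
                    Path:      &path,
                },
                CABundle: caBundle,
            },
            Rules: []admissionregistrationv1.RuleWithOperations{{
                Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
                Rule: admissionregistrationv1.Rule{
                    APIGroups:   []string{""},
                    APIVersions: []string{"v1"},
                    Resources:   []string{"pods"},
                },
            }},
        }},
    }
}

With failurePolicy set to Ignore, the API server admits pods even when the webhook is unreachable, which is what lets the controller itself start.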
From there, we added comprehensive test coverage, including both unit and integration tests. But I wanted this to be more than just a demo—it needed to be production-ready. That meant adding Dependabot configuration and ensuring integration tests would run on dependency update pull requests. We completed the implementation by setting up GitHub Actions following best practices and addressing all security alerts.
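To give a flavor of the unit tests, here is a minimal example exercising the patch-building logic from the controller above; the integration tests ran against the kind cluster and aren’t shown:

package main

import (
    "testing"

    corev1 "k8s.io/api/core/v1"
)

// TestBuildLabelPatch checks both cases: a pod with no labels map and a pod
// with existing labels.
func TestBuildLabelPatch(t *testing.T) {
    pod := &corev1.Pod{}
    ops := buildLabelPatch(pod)
    if len(ops) != 1 || ops[0].Op != "add" || ops[0].Path != "/metadata/labels" {
        t.Fatalf("unexpected patch operations: %+v", ops)
    }

    pod.Labels = map[string]string{"team": "platform"}
    ops = buildLabelPatch(pod)
    if ops[0].Path != "/metadata/labels/hello" || ops[0].Value != "world" {
        t.Fatalf("unexpected patch operations: %+v", ops)
    }
}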
You can view the original commit and the first PR on my GitHub.
Key Learnings
The most striking outcome was how much of the final code came directly from Claude—over 95% of the code in the final implementation was AI-generated. This wasn’t just boilerplate code either; it included proper error handling, test coverage, and infrastructure setup.
However, this success came with an important caveat. My experience with both Go and Kubernetes played a crucial role. I knew what components were needed for a working admission controller, how to debug Kubernetes events, and what a proper project structure should look like. Without this background, it would have been much harder to guide the AI or validate its output. Someone new to Go or Kubernetes would likely struggle to achieve the same results, even with identical AI assistance.
When I hit roadblocks with package imports or runtime errors, my experience helped me quickly identify solutions that might have taken a newcomer hours to figure out. For example, when our controller encountered certificate issues, I immediately knew to check the cert-manager logs rather than assuming it was a code problem.
Looking Forward
While creating a working admission controller was satisfying, it’s just the beginning. A simple mutating controller that adds a label is perfect for learning, but running anything in production requires several additional layers of consideration.
In the next part of this series, we’ll explore properly securing the controller—critical groundwork before considering production deployment.
From there, we’ll examine the release process, where we’ll discover interesting challenges around AI’s limitations with newer tools like goreleaser. Then we’ll tackle configuration management, testing, and observability—all the components that transform a working prototype into production-ready software.