Quick Take Link to heading

Using AI as a code reviewer significantly improved my Kubernetes controller’s quality by catching security issues and suggesting usability enhancements that I might have missed. This approach accelerated the feedback cycle and allowed me to focus on architectural decisions while the AI handled routine code quality checks.

Introduction Link to heading

In the previous parts of this series, I’ve shared how I built, secured, and added observability to a Kubernetes admission controller with AI assistance. While working with AI as a coding partner was valuable, I realized that having another set of eyes review the code could identify blind spots and further improve quality.

This led me to explore AI-powered code reviewers. After evaluating several options, I settled on Sourcery AI, as it integrated smoothly with my workflow and provided the most useful feedback out of the box. I decided to use their free offering for open source projects to complement my development process.

The AI Code Review Workflow Link to heading

My workflow involved submitting pull requests and having Sourcery automatically review them. I’d then take these comments and feed the valuable ones back to the AI I was using for coding assistance, creating a continuous improvement loop. This created a multi-AI development process where different systems could provide complementary perspectives on the same code.

What made this approach particularly effective was how it mirrored real-world development practices. Even when pair programming with AI, having a separate reviewer – whether human or artificial – helps identify issues that both participants in the pairing session might overlook.

How AI Reviews Improved the Controller Link to heading

The AI reviewer provided several types of valuable feedback that directly improved the quality of my Kubernetes controller:

Security Enhancements Link to heading

Security improvements were among the most valuable contributions. For example, Sourcery identified a potential TLS vulnerability that I had completely overlooked:

**🚨 suggestion (security):** Consider setting TLS minimum version to ensure secure communications

Add TLSConfig with MinVersion set to at least tls.VersionTLS12 to prevent use of older, less secure TLS versions

Suggested implementation:

```golang
import (
	"crypto/tls"
	// ... other imports ...
)

func Run(address string) error {
	server := &http.Server{
		Addr:              address,
		Handler:           mux,
		ReadHeaderTimeout: 10 * time.Second,
		WriteTimeout:      10 * time.Second,
		IdleTimeout:       120 * time.Second,
		TLSConfig: &tls.Config{
			MinVersion: tls.VersionTLS12,
		},
	}

	// ... rest of Run unchanged ...
}
```

Note: You’ll need to ensure the `crypto/tls` import is properly merged with any existing imports in the file. If there’s already an import block, add `"crypto/tls"` to it rather than creating a new import block.

This suggestion was particularly valuable because it addressed a security issue that wasn't caught by either my manual review or the AI I was coding with. After seeing this feedback, I implemented an even stronger security posture by moving to TLS 1.3 as the minimum version.

Comprehensive PR Summaries Link to heading

One of the most impressive aspects of the AI reviewer was its ability to generate detailed PR summaries that captured both what changed and why it mattered. These summaries included visualizations that made it easier to understand the impact of changes:

Reviewer’s Guide by Sourcery Link to heading

This PR adds CLI functionality to allow users to override the default webhook server address and port. The implementation uses Cobra for command-line argument parsing and includes validation for the address format, port, and host IP.

Sequence diagram for webhook server startup with CLI address Link to heading

```mermaid
sequenceDiagram
    actor User
    participant CLI as Command Line Interface
    participant Main as main
    participant Webhook as webhook

    User->>CLI: Execute webhook command with --address
    CLI->>Main: Parse address
    Main->>Main: Validate address format
    Main->>Webhook: Run(address)
    Webhook->>Webhook: Start server on address
    Webhook-->>User: Server running on specified address
```

Class diagram for CLI address override feature Link to heading

```mermaid
classDiagram
    class main {
        +string address
        +cobra.Command rootCmd
        +init()
        +main()
    }
    class webhook {
        +Run(address string) error
    }
    main --> webhook : uses
    note for main "The main package now includes a Cobra command for CLI parsing and address validation."
```

File-Level Changes Link to heading

  * **Added command-line interface support using Cobra** (`pkg/webhook/cmd/main.go`)
    * Introduced a root command with description and usage information
    * Added `--address` flag with a default value of `0.0.0.0:8443`
    * Implemented address validation in a `PreRunE` hook
    * Updated the main function to use Cobra’s `Execute()`
  * **Modified webhook server to support a configurable address** (`pkg/webhook/webhook.go`)
    * Updated the `Run` function to accept an address parameter
    * Modified the server configuration to use the provided address
    * Updated the log message to show the configured address
  * **Updated project dependencies** (`go.mod`, `go.sum`)
    * Added the `github.com/spf13/cobra` dependency
    * Added related indirect dependencies

These detailed summaries helped me better understand the impact of my own changes and maintain documentation alongside the code. The visual representations made it easier to explain the changes to others and provided a historical record of architectural decisions.

Comparing AI and Human Code Reviews Link to heading

Having worked with both human and AI reviewers, I noticed several key differences:

Speed and Availability Link to heading

The most obvious advantage of AI code review was speed. Sourcery provided feedback within minutes of submitting a PR, while human reviewers might take hours or days, depending on their availability. This rapid feedback loop accelerated development significantly.

Focus and Reasoning Link to heading

I found that the AI reviewer excelled at providing clear reasoning for its suggestions, often with concrete implementation examples. This approach helped me understand not just what to change, but why the change was necessary—a practice that isn’t always present in human reviews.

Limitations Link to heading

Like all current AI tools, Sourcery showed knowledge limitations similar to other large language models. It occasionally suggested approaches using older versions of libraries or patterns that weren’t ideal for the latest Go or Kubernetes releases. When this happened, I had to rely on my experience to evaluate whether the suggestion was appropriate.

Practical Benefits for Kubernetes Development Link to heading

For Kubernetes controller development specifically, the AI reviewer provided several key benefits:

  1. Security Validation: It caught several security issues related to TLS configuration, file permissions, and input validation that are particularly important for controllers running with elevated permissions.

  2. Configuration Best Practices: The reviewer consistently suggested improvements to configuration handling, which is crucial for Kubernetes controllers that need to be highly configurable.

  3. Documentation Improvements: It prompted me to improve documentation in areas where it was lacking, enhancing the maintainability of the controller.

Recommendations for Using AI Code Review Link to heading

Based on my experience, here are some recommendations for developers interested in incorporating AI code review into their workflow:

  1. Use AI Review as a First Pass: Have AI review your code first to catch common issues, then let human reviewers focus on architecture and business logic.

  2. Evaluate Suggestions Critically: Not all AI suggestions will make sense for your specific context. Evaluate each one based on your project’s needs.

  3. Create a Feedback Loop: Feed valuable AI review comments back into your development process to improve future code.

  4. Combine Multiple AI Tools: Using different AI tools for development and review can provide complementary perspectives.

Looking Forward Link to heading

I believe AI code review represents one of the most promising applications of AI in software development. Unlike code generation, which often requires significant human guidance, code review is a more constrained problem where AI can provide immediate value.

In the future, I expect AI reviewers to become even more specialized for specific domains like Kubernetes, with deeper knowledge of best practices and common pitfalls. They’ll likely move beyond catching issues to suggesting architectural improvements and performance optimizations.

For now, I plan to continue using AI review for this controller project, especially as I add tracing capabilities in the next part of this series. The combination of AI coding assistance and AI review has significantly improved both my productivity and the quality of the final product.

Conclusion Link to heading

Incorporating AI as a code reviewer has been a valuable addition to my Kubernetes controller development process. While it doesn’t replace human reviewers, it provides a fast first line of defense against common issues and encourages better coding practices.

The specific examples of security improvements and detailed PR summaries demonstrate how AI review can catch issues that might otherwise slip through. For developers working on Kubernetes controllers or similar projects, I highly recommend experimenting with AI code reviewers to complement your existing development workflow.

In the next part of this series, I’ll explore adding distributed tracing to the controller, which represents the final major component needed to make it truly production-ready.