Quick Take: Setting up GoReleaser for a Kubernetes controller revealed both the power and limitations of AI assistance. While AI helped me discover and implement an unfamiliar tool, working through configuration challenges taught me valuable lessons about automated releases and the importance of validating AI-provided information.

Introduction

In the first part of this series, I shared how AI helped me build a working Kubernetes admission controller. The second part covered securing that controller for production use. Now I want to share what turned out to be one of the more interesting aspects of this journey - setting up releases. This is where I discovered both the power and limitations of AI assistance, particularly around tools I’d never used before.

The Release Challenge

My requirements for releases were straightforward:

  • Automated builds for multiple architectures (amd64 and arm64)
  • Container images published to GitHub Container Registry
  • Proper semantic versioning
  • Release notes and changelogs
  • Security scanning of released artifacts

When I asked Claude how to handle this, it suggested using GoReleaser - a tool I’d never worked with before. This was exactly the kind of situation where AI assistance could really shine: learning a new tool while implementing it in a real project.

Learning GoReleaser Through AI

The initial configuration Claude suggested looked reasonable:

before:
  hooks:
    - go mod tidy

builds:
  - env:
      - CGO_ENABLED=0 # Disables cgo, producing static binaries
    goos:
      - linux
    goarch:
      - amd64
      - arm64
    main: ./cmd/webhook
    ldflags:
      - -s -w # Strip debugging info to reduce binary size
      - -X main.version={{.Version}}
      - -X main.commit={{.Commit}}
      - -X main.date={{.Date}}

I immediately recognized the value of setting CGO_ENABLED=0 for container images - it ensures we build static binaries that don’t depend on system libraries. The ldflags settings both optimize binary size (by stripping debug information) and inject version information at build time. This would let me access version details in the running application, which I knew would be particularly useful for logging and monitoring.
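For the -X flags to do anything, the webhook's main package needs matching package-level variables to overwrite. A minimal sketch of what that looks like (the default values and the versionString helper are my illustration, not the project's actual code):

```go
package main

import "fmt"

// Defaults apply to a plain `go build`; GoReleaser overwrites them at link
// time via -ldflags "-X main.version={{.Version}}" and friends.
var (
	version = "dev"
	commit  = "none"
	date    = "unknown"
)

// versionString is handy for startup logs, metrics labels, or a /version endpoint.
func versionString() string {
	return fmt.Sprintf("pod-label-webhook %s (commit %s, built %s)", version, commit, date)
}

func main() {
	fmt.Println(versionString())
}
```

Note that the variable names must match the -X flags exactly (package path plus identifier), or the injection silently does nothing.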

But when I tried to build the container image, I hit a wall. The standard Dockerfile that worked fine for my local development wasn’t playing nice with GoReleaser’s build process. I spent a frustrating afternoon trying different configurations before going back to Claude for help.

Finding the Right Approach

The key insight came when Claude explained that GoReleaser needs a specialized Dockerfile that expects to find the binary already built. This wasn’t obvious from the documentation I had looked at. After some back-and-forth, I created a separate goreleaser.dockerfile:

FROM scratch
COPY add-pod-label /add-pod-label
ENTRYPOINT ["/add-pod-label"]

Along with the corresponding GoReleaser configuration:

dockers:
  - image_templates:
      - "ghcr.io/jjshanks/pod-label-webhook:{{ .Version }}"
      - "ghcr.io/jjshanks/pod-label-webhook:latest"
    dockerfile: goreleaser.dockerfile
    use: buildx
    build_flag_templates:
      - "--platform=linux/amd64"
      - "--label=org.opencontainers.image.source={{.GitURL}}"
      - "--label=org.opencontainers.image.created={{.Date}}"
      - "--label=org.opencontainers.image.version={{.Version}}"
      - "--label=org.opencontainers.image.revision={{.Commit}}"

This was a perfect example of learning through AI assistance - not just getting working code, but understanding why certain approaches work better than others. I realized that GoReleaser’s workflow fundamentally differs from my manual build process, and the tooling is designed with specific assumptions about how artifacts are created and packaged.

Setting Up the Release Workflow

With the GoReleaser configuration sorted out, I worked with Claude to create a comprehensive GitHub Actions workflow that handles both automated and manual releases. The complete workflow includes some interesting features around release notes:

- name: Update release notes
  if: success()
  uses: actions/github-script@v7
  with:
    script: |
      // `tag` and `release` are defined earlier in the full script
      // (the release is looked up by its tag); only the update step is shown.
      const releaseNotes = `## ${tag}\n\n${release.data.body}\n\n---\n\nFor installation instructions and documentation, please visit our [documentation](docs/README.md).`;

      await github.rest.repos.updateRelease({
        owner: context.repo.owner,
        repo: context.repo.repo,
        release_id: release.data.id,
        body: releaseNotes
      });

GoReleaser automatically generates changelog entries based on the commits since the last release, but I wanted to enhance this with additional documentation links. The workflow first lets GoReleaser create its standard release notes, then appends our custom documentation section. This ensures users always have quick access to installation instructions right from the release page.
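The generated changelog is itself configurable. A typical tweak (a sketch of options I found useful, not necessarily the exact config in the repo) filters noise commits out of the notes:

```yaml
changelog:
  sort: asc
  filters:
    exclude:
      - "^docs:"
      - "^test:"
      - "^chore:"
```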

I also added validation for version numbers to ensure they follow semantic versioning:

VERSION_WITHOUT_V="${VERSION#v}"
SEMVER_REGEX="^(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)(-((0|[1-9][0-9]*|[0-9]*[a-zA-Z-][0-9a-zA-Z-]*)(\.(0|[1-9][0-9]*|[0-9]*[a-zA-Z-][0-9a-zA-Z-]*))*))?(\+([0-9a-zA-Z-]+(\.[0-9a-zA-Z-]+)*))?$"

This regex ensures version numbers are properly formatted, supporting both standard releases (v1.2.3) and pre-releases with build metadata (v1.2.3-alpha.1+meta). The workflow won’t proceed if the version format is invalid, which helps maintain consistent versioning across all releases. I’d been burned before by inconsistent version formatting, so this validation step was important to me.
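Wired into a workflow step, the check looks something like this (a sketch; the argument handling and error message are mine):

```shell
#!/usr/bin/env bash
set -euo pipefail

VERSION="${1:-v1.2.3-alpha.1+meta}"  # tag passed in by the workflow
VERSION_WITHOUT_V="${VERSION#v}"     # strip the leading "v"
SEMVER_REGEX="^(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)(-((0|[1-9][0-9]*|[0-9]*[a-zA-Z-][0-9a-zA-Z-]*)(\.(0|[1-9][0-9]*|[0-9]*[a-zA-Z-][0-9a-zA-Z-]*))*))?(\+([0-9a-zA-Z-]+(\.[0-9a-zA-Z-]+)*))?$"

if [[ "$VERSION_WITHOUT_V" =~ $SEMVER_REGEX ]]; then
  echo "valid: $VERSION"
else
  echo "invalid version format: $VERSION" >&2
  exit 1
fi
```

Failing the step with a non-zero exit code is what actually stops the rest of the workflow from running.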

The Limitations of AI Assistance

While AI was incredibly helpful throughout this process, I also encountered some limitations. At one point, Claude suggested Docker configuration options that didn’t actually exist in the current version of GoReleaser. It took me several hours of troubleshooting to realize that the AI was suggesting syntax from a future version of the tool that hadn’t been released yet.

This taught me an important lesson: while AI can accelerate learning and implementation, it’s still crucial to verify suggestions against official documentation. I now have a habit of cross-checking AI recommendations with the latest docs, especially for tools I’m unfamiliar with.

Another challenge was getting multi-architecture builds working correctly. The initial configuration only built for amd64, and when I asked about supporting arm64, the suggestions didn’t work as expected. I eventually figured out that GoReleaser’s approach to multi-arch builds has evolved significantly, and some of the examples Claude provided were outdated.
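For reference, the pattern that current GoReleaser versions use for multi-arch images is one dockers entry per architecture plus a docker_manifests section that stitches the per-arch images into a single tag. A sketch of that shape (treat the exact fields as something to verify against the current docs):

```yaml
dockers:
  - image_templates:
      - "ghcr.io/jjshanks/pod-label-webhook:{{ .Version }}-amd64"
    dockerfile: goreleaser.dockerfile
    use: buildx
    goarch: amd64
    build_flag_templates:
      - "--platform=linux/amd64"
  - image_templates:
      - "ghcr.io/jjshanks/pod-label-webhook:{{ .Version }}-arm64"
    dockerfile: goreleaser.dockerfile
    use: buildx
    goarch: arm64
    build_flag_templates:
      - "--platform=linux/arm64"

docker_manifests:
  - name_template: "ghcr.io/jjshanks/pod-label-webhook:{{ .Version }}"
    image_templates:
      - "ghcr.io/jjshanks/pod-label-webhook:{{ .Version }}-amd64"
      - "ghcr.io/jjshanks/pod-label-webhook:{{ .Version }}-arm64"
```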

Learning from the Experience

This part of the project taught me several valuable lessons about working with AI:

  1. Tool Discovery: AI can be great at suggesting tools you might not have considered. I wouldn’t have known about GoReleaser if Claude hadn’t suggested it, and it turned out to be exactly what I needed.

  2. Learning New Tools: Working with AI while learning a new tool provides a unique advantage - you get both the “what” (working configurations) and the “why” (explanations of how things work). This accelerated my understanding of GoReleaser considerably.

  3. Problem Solving: When something doesn’t work, having AI help debug and explain the underlying issues leads to better understanding. The Dockerfile issue was a perfect example - solving it taught me about how GoReleaser actually works.

  4. Verification is Essential: Always verify AI suggestions against official documentation, especially for tools you’re not familiar with. This saved me from several potential issues later in the project.

  5. Iterative Development: The process of getting releases working properly was iterative - starting with basic configurations, identifying issues, and gradually improving the setup. AI assistance made this process more educational and efficient.

Looking Forward

Getting releases working properly was a crucial step toward having a production-ready controller. With automated builds, security scanning, and proper versioning in place, I’m almost there. In the next part of this series, I’ll look at monitoring and observability - ensuring I can effectively operate this controller in production.

The complete release configuration and workflows can be found in the GitHub repository along with example releases that show the setup in action.