HIVEDmind
AI coding at HIVED
Our engineers walk through their AI coding journey so far, and how they’ve enabled AI agents to work well across their codebase.

Andy Provan · 4 min read

Individual engineers began experimenting with GitHub Copilot and Cursor in early 2025. AI coding was first demoed in a HIVEDmind knowledge-sharing session, with Claude 3.7 Sonnet making basic UI changes that still needed manual nudging to produce working code. But it was enough to spark wider adoption. We tried different tools, compared notes, and settled on Claude. Subscriptions went out to every engineer shortly after.
What made the biggest difference was how we adapted the surrounding processes to support the AI coding agents.
Giving AI agents the right harnesses
The biggest unlock was building the feedback loops that let AI agents know quickly when they were going wrong.
For our Golang repositories, that meant making sure agents run our linters and tests after every change. Thanks to Go, these only take a few seconds on incremental runs, enabling tight iteration loops and fast corrections.
A small but meaningful detail: without explicit instructions, agents default to running “go test -v ./...”, executing the full test suite with verbose logging enabled. Whilst this works fine, it unnecessarily consumes tokens and context. Adding a single line to the agents file fixes this, keeping runs lightweight whilst still giving enough visibility for debugging.
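For illustration, the kind of line we mean looks something like this (the wording below is our sketch of such an instruction, not HIVED's actual AGENTS.md):

```markdown
## Tests
- Run tests with `go test ./...` (no `-v` flag).
- Only add `-v`, scoped to a single package, when debugging a specific failure.
```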
Following our code style
As mentioned in “Why HIVED loves Go”, we use the lo library throughout our Go codebase. AI agents do not naturally reach for it, defaulting instead to more manual approaches. Whilst those are perfectly valid, we’d prefer AI code to match our existing styles and conventions.
This is a small example of a broader principle: the AGENTS.md file is where you encode your team's conventions. Without it, you get code that works; with it, you get code that fits.
Keeping code quality high
One common problem with AI agents is that they’ve been trained on outdated code that is no longer best practice. An example from Go: before Go 1.22, a range loop declared a single variable that was reused on every iteration, so any closure or goroutine that captured it saw whatever value it held when the closure actually ran, usually the final one. The standard workaround was to copy the variable inside the loop body. Go 1.22 fixed this by giving each iteration its own variable, but AI models have been trained on code written before the fix, so they continue to output the no-longer-needed copy.
One way to solve this is an AGENTS.md file, but that has significant drawbacks. Firstly, it would need to be very comprehensive, listing every possible rule the agent should follow, which would consume context even when it’s not needed. Secondly, it would just be advice, which sometimes isn’t sufficient to overcome an AI’s inherent instinct.
Fortunately, Golang has a rich linter ecosystem that covers cases like this. We added the relevant linter to our golangci-lint config, which our AI agents then run. The AI agent still makes this mistake, but is then quickly corrected by the linter.
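One linter that catches exactly this redundant copy on Go 1.22+ is copyloopvar, which ships with golangci-lint; enabling it is a one-line config change (the snippet below is a minimal sketch, not HIVED's real config):

```yaml
# .golangci.yml
linters:
  enable:
    - copyloopvar
```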
Hallucinations
Underlying AI models are improving rapidly, but we’ve still observed “hallucinations” even in models such as Claude Opus 4.6. Here’s a real example we saw, and a demonstration of how a good “harness” can guide an AI agent to still complete the main task, despite hallucinations, without having to revert to asking a human.
The AI agent incorrectly thinks a TryMap function exists within the lo library. But it gets feedback very quickly, in the form of a compile error. Then, rather than wasting more time and tokens guessing whether a similarly named function exists, the agent follows our agents file, which directs it to look up what functions are actually available. After seeing that there are none that fit, it correctly proceeds with a different approach and continues with the rest of the task.
Preventing useless comments
We noticed that AI agents seemed to feel it necessary to always write a comment, even if it added no value. We prefer to avoid comments like this and add them only for special scenarios or when something isn’t immediately obvious.
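An entirely made-up example of the pattern (ours, for illustration):

```go
package main

import "fmt"

func main() {
	// Sum the numbers.  <- the kind of comment agents add; it only restates the code
	total := 0
	for _, n := range []int{1, 2, 3} {
		total += n
	}
	fmt.Println(total) // 6
}
```

The comment above earns its place only when it explains something the code cannot, such as why a value is computed this way.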
Upping the ambition
Our ambition has expanded steadily over time, moving from inline edits to smaller features, and now to full integrations. We work with a wide range of integrations, with more continuously being added.
While some are more complex, many follow a standard pattern: receiving data in a defined format, transforming it into our internal structure, creating a parcel, and returning a label. These integrations are often well-documented and relatively repeatable, making them well suited to AI agents.
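Sketched as an interface, the repeatable pattern might look like this (all type and method names here are hypothetical, not HIVED's real code):

```go
package main

import "fmt"

// Hypothetical types illustrating the common integration shape:
// receive data, transform it, create a parcel, return a label.
type (
	// InboundOrder is the partner-specific payload we receive.
	InboundOrder struct {
		Ref     string
		Address string
	}
	// Parcel is the internal representation.
	Parcel struct {
		Ref     string
		Address string
	}
	// Label is what the integration returns.
	Label struct{ ParcelRef string }
)

// Integration captures the two steps every such integration implements.
type Integration interface {
	Transform(in InboundOrder) (Parcel, error)
	CreateLabel(p Parcel) (Label, error)
}

type demoIntegration struct{}

func (demoIntegration) Transform(in InboundOrder) (Parcel, error) {
	return Parcel{Ref: in.Ref, Address: in.Address}, nil
}

func (demoIntegration) CreateLabel(p Parcel) (Label, error) {
	return Label{ParcelRef: p.Ref}, nil
}

func main() {
	var itg Integration = demoIntegration{}
	p, _ := itg.Transform(InboundOrder{Ref: "ORD-1", Address: "1 High St"})
	l, _ := itg.CreateLabel(p)
	fmt.Println(l.ParcelRef) // ORD-1
}
```

Because each new partner mostly swaps the Transform step, the surrounding scaffolding stays the same, which is what makes the work so repeatable for an agent.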
From this prompt, the AI agent was able to iterate for 15 minutes, writing code, encountering compile errors, then linting errors, then finally integration-testing errors, correcting each in turn, until finally declaring the task complete.
Shipping fast
Metrics are always imperfect, but the signal is clear. AI coding has let a small engineering team ship faster, spend less time on boilerplate, and stay focused on the problems that actually require human judgement. One of our HIVED One shippers put it better than we could:
“No software company or courier has ever implemented any development requests as quickly as you have” - HIVED One Shipper
This covers one backend project, but we are incorporating AI across every project at HIVED, including the frontend, and our data team has its own evolving set of practices. The tools and models are improving fast, and so is our understanding of how to get the most out of them.
To discover more about HIVEDmind and the projects we are undertaking at HIVED, check out the rest of the blogs we have published here.


