
The AI race and how we fully embraced velocity

by Nick Heap. Published on 20 July 2025. 12 minute read


Thinking about Thinking

I’ve encouraged my kids from a young age to start thinking about thinking. To start questioning why the human experience is like it is. And one thing I’m finding at my local Philosophy pub meetup (which, like book groups, is generally a wrapper around socialising and wine) is that, try as I might, we always end up talking about AI.

That moment you see a streamed response coming from an AI and realise that there is no fully formed answer before the first word fragment is produced. And then the next time you speak a sentence to someone, you realise that you are doing exactly the same thing as the words literally trip off the tongue. Even typing these words now, the same thing. It’s the future that university-age Philosophy/Psychology me was hoping for as I did my basic LISP project and wrote my Philosophy dissertation on the possibilities of machine consciousness (it was terrible, btw!).

That jaw drop about what is technically possible reminds me very much of my first time in VR. I was amazed by the quality and immersion, then when I took off the headset I was mesmerized by the pure red “texture” of a multi-pack of KitKats that was on my table. It’s no wonder that people are talking more and more about being inside a simulation.

And then you get to where augmented/mixed reality is now, and your perception of the world is forever transformed. It’s probably best summed up by my favourite quote from William Gibson’s Mona Lisa Overdrive, “There is no there, there”[i] (originally from Gertrude Stein in 1937, about her childhood home no longer having its essence)[ii], where he refers to cyberspace being there but also literally not there.

It’s great to live in these times of digital beauty, as Jack Black famously sings about Red Dead Redemption 2[iii] being like expletive Shakespeare[iv].

Assistance vs doing it for you

But AI is not all great. One of my School of Code[v] mentees put out a LinkedIn post about a tech event we had both been to, and it had that smell of AI to it. All super excited, like the influencers on YouTube (my son torments me with them, but then he’s Gen Alpha so I just call him Skibidi[vi] to get even). When I suggested that to them, they said, “Oh yeah, I see that now!” In person they were eloquent and expressed themselves well, so there was no need to AI it. My new term for this is TAIDR (too AI, didn’t read)!

I really don’t like reading text where someone has taken bullet points and fleshed them out with AI. You get this weird flow where someone uses AI to inflate some sparse text into an email, and then you deflate it by asking an AI to summarise. Or knowledge base wastelands flooded with verbose AI-written documentation, when humans are all very much “too long, didn’t read” (TLDR) and would rather just send a Teams message anyway. Meaning is simply lost in the process. Please just send me the bullet points!

It brings to mind developers who will rewrite already available docs into their own version on Confluence or a blog, rather than just say, “Hey, this is great over here!” Soulless shallow copies with no depth and nothing of themselves added.

So we wanted to position our AI very much in that assistance role, where humans are still creating but are given the tools to handle the information overload. And surprisingly enough, the feature we saw the most interest in through our proof of concept was summarising documents or data. Yes, TLDR all the things! People just didn’t have time to prioritise all the documents they received in a day without first getting a feel for what they were going to read. And they wanted to get a feel for their data without having to manually write reports and queries.

Prototyping the UK’s first sovereign AI

And from there we had the realisation that OneAdvanced is in the perfect position to offer a sovereign AI to our customers: one that protects their data and documents and gives them a safe environment to reap the benefits of AI augmentation. We took our best ‘gotta go fast’[vii] team and spent a crazy two weeks just seeing what was possible.

Python is the natural language of AI, so we tooled up for that even though most of our platform is based on JavaScript Lambda functions. As I say to my School of Code mentees who learn HTML/CSS/JavaScript: “You did learn Python as well, right?”

Accelerated by Amazon Bedrock[viii], we got our base chat completion capability deployed. A Bedrock Agent to access UK legislation, so that we could offer up-to-date legal advice, came the next day. Then came an Agent that could query a demo customer’s data from Snowflake[ix] and produce graphical tables in the response messages. Then I asked the question, ‘We should be able to do graphs... right?’ Two hours later we had taught the agent how to make Chart.js graphs from our data, and it felt wonderful. Like when you start to learn coding and it feels like some magical incantation, this was a real OMG reaction all round.
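To give a feel for that last step, turning query rows into a Chart.js config the UI can render is roughly this shape (a minimal sketch of my own; the function and field names are illustrative, not our actual agent code):

```python
import json

def rows_to_chartjs(rows, label_key, value_key, title):
    """Turn query result rows into a Chart.js bar-chart config.

    Illustrative sketch only: the real agent builds something similar
    and embeds the JSON in its response message for the UI to render.
    """
    return {
        "type": "bar",
        "data": {
            "labels": [row[label_key] for row in rows],
            "datasets": [{
                "label": title,
                "data": [row[value_key] for row in rows],
            }],
        },
        "options": {"plugins": {"title": {"display": True, "text": title}}},
    }

rows = [
    {"month": "Jan", "invoices": 120},
    {"month": "Feb", "invoices": 95},
]
print(json.dumps(rows_to_chartjs(rows, "month", "invoices", "Invoices per month")))
```

The agent only has to produce a dict like this; the heavy lifting of actually drawing the chart stays in the browser.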

This was vibe coding at its finest, with 25+ years of experience behind it. No restrictions, startup speed. It was so much fun that putting in some extra hours actually felt like a treat! It’s like my pet mixed reality side project: pushing at the edge of the possible, where it just doesn’t feel like work.

We demoed this to the business and the response was off the scale!

Now make it properly

We kept nothing from that demo and now we needed to do things for real.

This meant proper deployment pipelines, unit testing, linting, vulnerability scanning, observability etc.

One thing I learnt in my time at DHL Parcel (during pandemic-level parcel volumes) was that the sooner you can get your feature out, the sooner your business can start to reap the advantages. And if we wanted to be first, we needed to really embrace daily deployments. That means day-0 pipelines to deploy ‘no content’ modules, then small deployments at least once a day. And our internal users loved seeing this rate of change, which they just weren’t used to with the standard deploy-at-the-end-of-the-sprint mentality.

Some of my team needed a little persuading that we could work like this, but the stack of done tickets, the biggest I’ve ever seen on a project, really helped keep us motivated. Gamification theory working wonders!

One of the challenges with our vision of keeping everything isolated and in the UK was that we had leant heavily on the excellent Amazon Bedrock as an accelerator, and that was no longer an option. So we had to quickly learn Amazon SageMaker[x] and run our own Llama 3 in our own UK accounts.

At the end of our first two-week sprint, we had our UI sending the full conversation to the backend each time, alongside any file content the user had selected to include.

At the end of the second two-week sprint, we had switched this to uploading files to S3 and resolving the file content server side.

This, along with local browser chat history, guardrails, and PII detection, formed our MVP product.
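As a rough sketch of that sprint-one-to-sprint-two switch (every field name here is my own illustration, not our real API):

```python
# Sprint 1 shape: the UI shipped the full conversation plus raw file
# content inline with every request.
inline_request = {
    "messages": [{"role": "user", "content": "Summarise this document"}],
    "files": [{"name": "policy.pdf", "content": "(raw file bytes went here)"}],
}

# Sprint 2 shape: files are uploaded to S3 first; the request carries
# only object keys, and the backend resolves the content server side.
s3_request = {
    "messages": [{"role": "user", "content": "Summarise this document"}],
    "file_keys": ["uploads/org-123/policy.pdf"],
}

def resolve_files(request, object_store):
    """Server-side step: swap the S3 keys for the stored file content.

    object_store is a plain dict standing in for an S3 client here.
    """
    resolved = {k: v for k, v in request.items() if k != "file_keys"}
    resolved["files"] = [object_store[key] for key in request.get("file_keys", [])]
    return resolved
```

The win is that the request payloads stay small and the file content never has to round-trip through the browser again.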

Observability

Another DHL Parcel lesson was that everyone loves graphs! I’d have a screen always on, dedicated to watching the performance of the production code I’d written, in true DevOps style (I do the same now with our AI service). Every morning of the pandemic, when all the drivers started to scan millions of parcels onto their vans, it was so intense watching the graphs rapidly rise and wondering if the systems were going to cope with the load. Interesting fact: every performance graph you watch will eventually have a line that looks like Batman’s cowl[xi] (my HR therapy cat Cuculla also looks like this)!

If you are going to build a service and keep it running, you need that kind of great observability. Effective logging, in a format that one of the observability platforms can use, is key here. It is also important to keep the spam of debug logging, which you might find highly valuable on localhost, out of your production environment, so that you can focus on what is actually happening. Logging how long things take and who is doing them will give you so much better insight, so that you can Teams message a tester: “Hi, what did you just do?”
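A minimal sketch of what that kind of structured logging can look like in Python (the field names and logger setup are my illustration, not our production config):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line so an observability platform
    can index fields like who did something and how long it took."""

    def format(self, record):
        entry = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Pick up structured extras passed via logger.info(..., extra={...}).
        for key in ("user", "action", "duration_ms"):
            if hasattr(record, key):
                entry[key] = getattr(record, key)
        return json.dumps(entry)

logger = logging.getLogger("ai-service")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)  # debug spam stays on localhost only

logger.info("chat completion served", extra={
    "user": "tester-42",
    "action": "chat.completion",
    "duration_ms": 1840,
})
```

One line per event, with `user` and `duration_ms` as first-class fields, is exactly what makes the “what did you just do?” query possible later.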

This is one of the reasons for DevOps: once you have to write the observability queries yourself, you’ll know why good logging is important.

Pace of change

The pace of change in AI is phenomenal, and I’m definitely in team ‘the Singularity is already here’[xii]. This makes tying something down to produce a workable product difficult. We picked our LLM and then questioned weekly whether we needed to move to the next best thing. It is just that fast. Our more technical consumers were asking, “When are you moving to version x?”

I can see this is going to be difficult in the future, as stability for our product is key to what we are offering; we are not trying to win some artificial test here, we are trying to deliver consistent AI assistance to businesses.

We figure that, despite the cost, we are going to have to run parallel LLM models to deliver that stability and yet not get left behind. This is certainly our plan for introducing Llama 4 alongside our existing Llama 3, until we can verify that the upgrade will not change the experience for our users in any detrimental way. As Uhtred in The Last Kingdom[xiii] says, “Destiny is All!”, but in our case, “Stability is All!”
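One way to run two models side by side is to route a small, sticky fraction of traffic to the candidate model. This is just a sketch of the idea (the model names and hashing scheme are my illustration, not our rollout mechanism):

```python
import hashlib

def pick_model(user_id, candidate_fraction=0.1,
               stable="llama-3", candidate="llama-4"):
    """Route a deterministic fraction of users to the candidate model.

    Hashing the user id keeps the routing sticky: the same user always
    gets the same model, so their experience stays consistent while the
    two versions are being compared.
    """
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = digest[0] / 256  # deterministic value in [0, 1)
    return candidate if bucket < candidate_fraction else stable
```

The same comparison could equally be driven offline, by replaying a fixed set of test prompts against both models before any user sees the new one.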

Reasoning

During our first POC we found that people really liked how the AI was able to give references and reasoning steps. It helps move things away from “OK, well, that is some words or data from somewhere; I’m not sure I can trust that.” Like when you are talking to someone: you are also talking to yourself, and sometimes you jump in and say, ‘Hang on, no, that’s not what I meant.’ You want to see that the AI has looked at what it is saying and is properly reasoning about it.

With the switch to Amazon SageMaker and Llama 3 we don’t yet have full reasoning back in place, but we do have references for every document that is pulled in through RAG and for any web search results or pages that are scanned. This allows our users to verify the sources that have been used and go and check them in more depth, to reassure themselves about the information being provided.
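The pattern of carrying references alongside the answer is simple enough to sketch (the function shapes and field names are illustrative; `generate` stands in for the actual model call):

```python
def answer_with_references(question, retrieved_chunks, generate):
    """Answer a question from retrieved context and return the sources
    that were actually pulled in, so the user can verify them."""
    context = "\n\n".join(chunk["text"] for chunk in retrieved_chunks)
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    return {
        "answer": generate(prompt),
        "references": [
            {"title": chunk["title"], "source": chunk["source"]}
            for chunk in retrieved_chunks
        ],
    }
```

Because the references are built from exactly the chunks fed to the model, the user sees what the answer was grounded in, not a guess after the fact.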

It’s also ethically important that we are able to show there is no bias or manipulation in the results we return. This is one thing about those apps you download that promise free AI: you have to “follow the money”[xiv]. There is no way to know how things are pushed a tiny bit in this or that direction, or which points of view are suppressed.

The benefits of Platform

We were very lucky at OneAdvanced to be well on our journey to delivering our core software as a platform. This was a real accelerator: we didn’t have to build everything from the ground up, but could leverage our UI standards and base platform services.

And the experience of taking disparate products and bringing them together for the platform has really powered up our platform team, an amazing group of talented people. Bringing back memories of the frenzied year-2000 bug squashing over thousands of lines of code, we uplifted all our core and product modules multiple times. For all of you thinking that sounds terrible, hands up here: I actually really love the huge challenge of taking a known starting state and getting it to a much better finished state. It’s the feeling of a modern-day board game, like Ticket to Ride[xv], where you have a plan on turn one and manage to pull it off by the end of the game!

Winning

So we did it. We delivered a great AI solution in record time: the first UK sovereign AI product[xvi].

It took some extra hours for sure, but it had that hackathon vibe where everyone was just, “I’d love it if we can get it to do this.”

You can Google a huge number of tutorials on how to build an AI service and think it’s easy, but none of them deal with the complexities of building something like ours:

An AI service that is hosted only in the UK and doesn’t retain or train on any user queries or responses (they’re not even logged).

Uploaded documents at the user and organisation level for Retrieval Augmented Generation (RAG), completely secure and not even readable by us.

I mean obviously we’d love to use the queries and documents for analytics, but we can’t!

Hang on, I was supposed to be free to art?!

Let’s finish on a note that my amazing “I dislike AI” wife would enjoy (she has to deal with people cheating with AI).

There is justified concern that the AI solutions we are building are taking away all the lower-end jobs, at least to start with. This goes against our utopian idea that AI would be our servant and we would be free to just do the enlightened creative work.

AI can certainly impact creative roles. In my university days I would read poetry and produce my own amateur versions (heck, I was published in my Student Union paper, so I’m counting that as semi-pro). AI is so good now (“in the style of Shakespeare”) that I think all amateur productions would be suspect.

But we need to look at this a little zoomed out, away from individual people’s jobs (or budding poets), out into the philosophical future of humanity.

I think we now know that those art teachers who said, “Go experience as much art as you can and you will find your own style,” were right. They just didn’t imagine a student who could look at ALL art and replicate its style perfectly. Or go read all poetry and write the works Shakespeare would have written about Red Dead Redemption 2.

I still speak to people who think AI keeps an internal copy of copyrighted works that it uses to make new AI art. But no, it is just a perfect student, capable of creating new Sistine Chapels on the fly. And now we have Studio Ghibli[xvii]-style art on tap.

As we climb out of the uncanny valley of AI-generated art, it will be much harder to tell it from human art. Although, thankfully, we can probably be done with terrible-looking developer mock-ups!

And when it comes to code, we now have a new sword-fighting[xviii] excuse: “Waiting for Claude”. Well, at least until Devin[xix] ends up doing that for us.

But I think the reaction to the flood of AI content will be much like the organic movement in food. People will value those words imperfectly written by the human hand. I know for sure that, for myself, if I detect AI content I already don’t bother reading any more (TAIDR!).

And it may well be like that scene in Terminator[xx] where they have dogs to detect inorganic “content”; I certainly hope our organic humanity will fare better.

I mean at least until the singularity and then all bets are off!

 

Disclaimer: I did not use AI to help write this, so all mistakes are my own!

Curious about what OneAdvanced has to offer? Check our Careers page and find your next professional challenge.



[i] Mona Lisa Overdrive, William Gibson 1988

[ii] Everybody's Autobiography, Gertrude Stein 1937

[iii] Red Dead Redemption 2, Rockstar Games 2018

[iv] Video Games, Tenacious D 2023

[v] https://schoolofcode.co.uk/

[vi] https://www.merriam-webster.com/slang/skibidi

[vii] Sonic X TV Series 2003-2006

[viii] https://aws.amazon.com/bedrock/

[ix] https://www.snowflake.com/

[x] https://aws.amazon.com/sagemaker/

[xi] The Alternate Bat Pattern as identified by Scott Carney in 2003

[xii] Dr. Martin Hiesboeck LinkedIn post, 2018

[xiii] The Last Kingdom (TV Series 2015–2022)

[xiv] All the President's Men 1976

[xv] https://www.daysofwonder.com/game/ticket-to-ride-united-kingdom/

[xvi] http://oneadvanced.com/resources/uk-software-company-launches-first-private-sovereign-ai-for-business/

[xvii] https://www.ghibli.jp/

[xviii] XKCD Compiling https://xkcd.com/303/

[xix] https://devin.ai/

[xx] Terminator 1984

 

About the author


Nick Heap

Principal Software Engineer
