Hello. I’m Tom Hillman. I’m building the stuff I wish existed before the awkward questions hit.
I help security and GRC leaders in regulated SaaS deploy and operate agentic AI without losing auditability, accountability, or trust.
I’ve spent the last half decade working in cyber security in a global, regulated enterprise business. A lot of that time has been the unglamorous part of the job: bank security questionnaires, audit evidence requests, incident calls where everyone is tired and slightly defensive, and the slow work of getting engineers to change behaviour without making it personal.
That tension is about to get sharper.
Agentic AI is turning into the way work gets done: triage, investigations, customer comms drafts, case management, evidence assembly, and sometimes decisions that affect customers. Sometimes it will be part of your product. More often it will be part of how you deliver your product.
I’m excited about that. I’m also aware of the trap: the feature works, but the proof does not. I’ve watched teams build something genuinely useful, then burn a week because they could not answer simple questions under pressure: what did it do, what did it touch, who approved it, can we reconstruct it end to end?
If your evidence lives in Slack, or a non-minuted meeting, it does not exist.
What you’ll get here, and what you won’t
I’ll aim to publish one post a week.
Each post is built around something you can reuse: a checklist, an evidence pack outline, a decision tree, a set of due diligence answers, or comms templates. Things you can paste into an internal doc or a questionnaire from a bank’s vendor assurance team and not feel embarrassed.
What you won’t get is hot takes, vague “AI governance” commentary, or content that exists mainly to sound clever. If a claim cannot survive contact with audit, it does not belong here.
Why this site exists
Because I can already see what is about to happen in regulated SaaS.
We will deploy agents into workflows. Then a bank, customer, or internal audit team will ask: what did the system do? What data did it touch? Who approved the changes? Can you reconstruct it quickly, and produce an audit trail without a week of people searching and guessing?
In many organisations the honest answer will be “not really”. That gap turns into a long, painful thread with audit, risk, engineering, and the customer, at exactly the wrong time.
The thread I’m pulling on
In regulated environments, trust is not your tone of voice. It is your ability to produce proof without improvising. The evidence should speak for itself.
So the core of this site is simple: deploy and operate agentic AI that stays auditable and trusted, with practical controls and reusable evidence.
What I’m writing first
I’m starting with the Evidence Pack because it’s the fastest way to stop hand-waving and start proving.
Over the next few weeks I’ll publish practical building blocks, including:
An AI evidence pack you can hand to an auditor or a bank’s supplier assurance team (and keep current without hating your life).
Decision integrity for high-stakes exceptions and human overrides.
Change control for prompts, models, tools, and connectors (where risk creeps in quietly).
Agentic workflow guardrails: permissions, boundary objects, and a kill switch that actually gets tested.
AI vendor risk for regulated SaaS: the questions that matter, the red flags, and what compensating controls look like in real life.
AI incident response when the model is part of the incident, including comms templates you can reuse.
I’ll also publish a small starter kit so you can copy-paste the minimum version and improve it over time. When it is live, it will sit at /evidence-pack-starter-kit/.
Newsletter
If you sign up, you’ll get one email a week: practical controls and reusable evidence. No drip sequence. You’ll get the building blocks as I publish them, plus the occasional condensed template that is easier to paste into internal docs. Unsubscribe in one click.
A quick favour, if you’ve got 20 seconds…
If you are already being asked awkward questions about agentic AI, reply below with the hardest one. Sanitise it, then paste it in. If enough people hit the same friction, I’ll turn it into a post and an artefact you can reuse.
If someone on your team owns audit or due diligence for AI, feel free to send this to them.
Thanks for reading. If this is the problem you are facing or can see coming down the road, you’re in the right place.