---
title: "Podcast — Episode 79: Agentic AI adoption and governance"
date: "2026-04-28"
excerpt: "Joshua Rubens and I cut through the agentic-AI hype on the latest Leading IT — what's real, what's marketing, and a practical playbook for Australian IT leaders standing up AI agents in 2026."
tags: ["AI", "Governance", "Podcast"]
author: "Tom Leyden"
---

The new episode of **Leading IT — APAC Insights** is live. Joshua Rubens and I sat down to talk through where agentic AI is genuinely earning its keep inside Australian businesses — and where the vendor pitch is still well ahead of the operational reality.

> **Listen here:** [pod.co/leading-it-apac-insights](https://pod.co/leading-it-apac-insights)

This isn't another "what is an AI agent" explainer. We assumed the audience knows the basics and went straight to the questions IT leaders are actually asking us in 2026: *what's working, what's failing, how do we govern it, and how do we tell the difference between an "agent" that's a real productivity lever and one that's just a chatbot in a hat.*

## What we covered

**1. The genuine adoption pattern that's emerging.**  Most AU mid-market firms experimenting with agents in 2026 are landing in the same three places: customer support triage, internal knowledge retrieval, and back-office process automation (invoicing, reconciliations, onboarding). Not the moonshots vendors are selling.

**2. Why "agentic" is being abused as a label.**  A lot of what's being marketed as agentic AI in 2026 is just a better chatbot, or RPA with a language model glued to the front. We talk about how to interrogate vendors past the marketing.

**3. The governance shift that no one is paying enough attention to.**  Once agents can take actions in your environment — write to systems, send emails, file tickets — the governance question changes shape. Audit trails, action approval, blast radius, decision provenance. CPS 230 compliance for agents is a category that barely exists yet, and APRA is asking the right questions.

**4. The practical playbook for getting started.**  We landed on a four-step approach we've seen work: pick one painful repetitive workflow, write the human process down, build a single agent that does just that one job, measure it for a month before expanding. Boring, sequential, and the only thing that compounds.

**5. Where humans still have to stay in the loop.**  Anything irreversible. Anything client-facing without review. Anything where the cost of a confident-but-wrong answer is higher than the cost of waiting for a human. My heuristic: *if you can't easily undo what the agent did, the agent shouldn't have done it autonomously.*
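That reversibility heuristic is easy to encode as a default-deny gate. A minimal sketch (the policy table and action names are hypothetical):

```python
# Hypothetical policy table: can this action be cleanly undone?
REVERSIBLE = {
    "draft_reply": True,      # a draft can be discarded
    "update_crm_note": True,  # notes can be edited or deleted
    "send_invoice": False,    # once sent, it's in the client's inbox
    "delete_record": False,
}

def allow_autonomous(action: str) -> bool:
    """If we can't easily undo it, the agent shouldn't do it autonomously.
    Unknown actions are treated as irreversible (default deny)."""
    return REVERSIBLE.get(action, False)

for action in ("draft_reply", "send_invoice", "unlisted_action"):
    mode = "autonomous" if allow_autonomous(action) else "human review"
    print(f"{action}: {mode}")
```

The design choice worth copying is the default: an action nobody has classified yet routes to a human, not to the agent.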

## The honest version

Agentic AI works — we know because we use it ourselves to run parts of Red Yellow Blue. But what works in our shop took twelve months of iteration, governance design, and retiring approaches that looked good on paper. Most firms launching agents in 2026 are skipping the iteration phase and wondering why the pilot stalled.

If you're an IT leader sizing up AI agents this quarter, the conversation in episode 79 is a useful sanity check. And if you'd rather have it as a 30-minute call than a 45-minute listen — [book a discovery call](/contact-us/) and we'll cut to your specific situation.

---

**Listen to Episode 79** on [Leading IT — APAC Insights](https://pod.co/leading-it-apac-insights). The show covers IT strategy, AI, cloud, security, vendors, and people leadership for CTOs, CIOs, and IT leaders across the APAC region.
