2 Comments
Julien Pelc:

Agree. Auditing millions of decisions and A/B testing is not scalable and would not make sense. What if governance put more emphasis on guardians and on what goals/outcomes really mean, which is more strategic and human-led (for now), and let the orchestration and testing simply be led by AI? There is still a lot of work in optimising what a goal is; very often it is too high level. That's one of the areas that can be further optimised thanks to these agentic AI systems, IMO.

Matthew Niederberger:

I do imagine a future where more systems are integrated into the agentic layer, even bookkeeping, stock-keeping, supplier contracts, etc., so that it will be able to build campaigns around margins, supplies, and viability (and more). The question you accurately raise is: where should governance focus? I remember this Latin quote from long ago, "Quis custodiet ipsos custodes?", who guards the guardians? If we can't humanly govern a concept and level of operation of agentic AI, can we build an AI solution to monitor AI... see where I am going? This loops back again to trust. Time will tell :)
