
Securing authenticated agentic AI

15 May 2025 🔖 security


I found 3 excellent blog posts by Stytch about the intersection of traditional IT concerns and AI agents:

  1. How to think through enhancing legacy workloads so that they can be used by customers’ and employees’ AI agents:
    • The age of agent experience
    • Key point: the workloads might need to become OAuth providers even if they previously weren’t, so that they can specify appropriate authentication (“authN”) & authorization (“authZ”).
  2. Intrusion detection patterns in the age of AI agents
  3. Authorization (“authZ”) for AI agents

Honestly, the whole blog looks interesting, at a glance.

Of course, every article ends by saying you should buy their authentication-as-a-service product.

But overall, they seem to have a pretty great team of writers thinking about big-picture architecture, observability, & IAM questions in the age of AI – both “feature”-wise and “defense”-wise.

I found it after Googling a term Netlify came up with recently: “agent experience,” or “AX,” which is meant to follow on the concepts of “user experience” (“UX”) and “developer experience” (“DX”).

We need better authZ, fast

One thing I didn’t see right away was how to refactor an API so that, for example, all ResourceA.read-scoped transactions are fine for a chatbot to perform on behalf of User #123 as soon as that human authorizes their chatbot to work on their behalf in that way – but so that even after User #123 also grants their chatbot permission to assume their identity for ResourceB.write-scoped actions, the human is still required to manually sign off on each and every write transaction against ResourceB that the chatbot attempts on their behalf.
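
The split I’m imagining might look something like this minimal sketch. The scope names, the `Decision` enum, and the policy table are all made up for illustration – real systems would pull policy from the human’s grant, not a hardcoded dict:

```python
from enum import Enum, auto

class Decision(Enum):
    ALLOW = auto()              # proceed immediately under the delegated grant
    REQUIRE_APPROVAL = auto()   # park the transaction until the human signs off

# Hypothetical per-scope policy: reads are fine once delegated;
# every write needs a fresh, explicit human sign-off.
SCOPE_POLICY = {
    "ResourceA.read": Decision.ALLOW,
    "ResourceB.write": Decision.REQUIRE_APPROVAL,
}

def authorize(agent_scopes: set[str], requested_scope: str) -> Decision:
    """Decide what to do with an agent's request under a delegated grant."""
    if requested_scope not in agent_scopes:
        raise PermissionError(f"scope not delegated: {requested_scope}")
    # Fail safe: any scope without an explicit policy requires approval.
    return SCOPE_POLICY.get(requested_scope, Decision.REQUIRE_APPROVAL)
```

The key design choice is that delegation and execution are decoupled: holding the `ResourceB.write` scope only earns the agent a seat in an approval flow, not the write itself.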

  • Some research I did indicated that the API should probably be refactored to include a “pending transactions” queue, and that some sort of out-of-band human approval step would be the trigger that releases transactions for execution. (It couldn’t just be the human saying “I approve” in the chatbot, or the chatbot might learn to lie to the API on the human’s behalf.)
    • Which made me think that if that’s how we’re gonna write APIs from now on, wow, we’re gonna need some standards for returning to where you left off, the way OAuth has a redirect URL syntax.
    • And we’re gonna need granular, intuitive controls, the way phone operating systems let you deny or revoke certain types of permissions from an app – breaking certain functionality at your own risk, if you want, but still letting you decide that your texting app can’t access your camera, if that’s meaningful to you, even though most people would probably leave it on at all times. And the way phone operating systems make it pretty obvious how to do it (e.g. “always ask” / “only this time” options).
      • Heck, we don’t even have enough of this as it is in conventional OAuth. I passionately hate the way GitHub doesn’t let organization owners cherry-pick which of a GitHub App’s desired scopes will actually be allowed. (It’s 2025, and it’s still like the bad old days of all-or-nothing phone app permission approval.)
    • The research chatbot suggested “Open Human Approval Protocol,” which cracks me up because “O.H.A.P.” reminds me of the word “mishap,” which seems appropriate.
  • That said, a company called Descope seems to have come up with a much catchier name for the concept I’m mulling over: “progressive scoping.”
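
A “pending transactions” queue like the one described above could be sketched as follows. Everything here is hypothetical – the class name, the token scheme, the storage – but the load-bearing idea is that the approval token travels to the human over a separate channel (email, push notification), never through the chatbot, so the agent can’t redeem it itself:

```python
import secrets

class PendingQueue:
    """Park agent-submitted writes until a human approves out-of-band."""

    def __init__(self):
        self._pending = {}   # approval token -> parked transaction
        self._released = []  # transactions cleared for execution

    def submit(self, transaction: dict) -> str:
        """Agent submits a write; the returned token is sent to the
        human out-of-band, NOT back through the agent."""
        token = secrets.token_urlsafe(16)
        self._pending[token] = transaction
        return token

    def approve(self, token: str) -> dict:
        """Human redeems the token; releases exactly one transaction.
        Raises KeyError on an unknown or already-used token."""
        transaction = self._pending.pop(token)
        self._released.append(transaction)
        return transaction
```

Single-use tokens matter here: popping on approval means a replayed or leaked token can’t release a second transaction.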

It’s a shame that the OAuth spec doesn’t have a way to make it so that you can have certain scope grants time out faster than others. e.g. leave ResourceA.read renewable w/o human intervention for days, but make ResourceB.write require human reapproval after 20 minutes.

Today’s authZ safety nets seem made for humans being the only source of nondeterministic action (that is, potentially exhibiting different behaviors on different rounds of performing an action).

Sure, we let computer scripts OAuth into our accounts and run at midnight while we sleep, but we’ve counted on them being programmed in a deterministic way (that is, given a particular input, they will always produce the same output).

Heck, humans are actually pretty deterministic, at least within their day-job contexts. We presume outsiders are always trying to hack us. We presume insiders could try to hack us, but that they generally aren’t going to get frustrated and keep trying harder if we don’t let them.

Anyway, authorization is going to have to be so different when machines suddenly can behave unpredictably!

Yikes. I don’t think a lot of us are ready to protect humans from surprises (nondeterminism) when they start trying to let chatbots help them do things!

“Service Design for AI: Why Human Experience (HX) Matters” gets at the broader concept behind why I’m horrified by authZ and what I meant when I said that humans are “pretty deterministic” within their day-job contexts.

“Disintermediation in AI-mediated systems is the process by which people become separated from direct relationships, capabilities, and decision-making as AI interfaces interpret and act on their behalf. For example, when AI chatbots become the primary way to access customer service, people don’t just lose direct contact - they lose the ability to express needs in their own terms, must learn to communicate in ways the AI understands, and lose access to human judgment in complex situations.”

People are going to start using agentic AI tools to help them achieve efficacy at tasks they already didn’t quite understand how to do. Part of our responsibility, as maintainers of the systems those tools will interact with on their behalf, is to lock down our systems’ authZ in ways that improve the human experience – that don’t leave humans wondering why everything’s broken once they point their tool at our service. We might’ve designed that service knowing about deterministic user-helper tools (e.g. web browser extensions), but deterministic machines are no longer all that our users might bring along to help them use our apps, APIs, etc.
