New York Isn’t Regulating AI — It’s Regulating Trust

New York’s latest AI laws aren’t just about regulating technology. They’re about making trust visible, enforceable, and provable — from frontier models to synthetic humans.

For years, AI regulation has been framed as a future problem.

Something we’d deal with once models became more powerful.
More autonomous.
More dangerous.

But New York’s recent moves make something clear:

The future problem is already here — and it’s not just about models.
It’s about trust.

When Governor Kathy Hochul signed a set of AI-related laws in December, from the RAISE Act governing frontier models to new disclosure rules for AI-generated “synthetic performers” in advertising, the message wasn’t subtle.

New York isn’t trying to slow AI down.
It’s trying to make AI legible.

And that distinction matters more than most people realize.


Why this matters now

We’ve entered a phase where AI is no longer confined to back-office automation or experimental pilots.

It writes copy.
It screens candidates.
It recommends treatments.
It represents people who don’t exist.

In other words, AI is no longer just doing things.
It’s appearing, deciding, and influencing — often in ways users can’t see.

That’s the moment regulators care about.

Not because the technology is impressive, but because the asymmetry of understanding has become too large. One side knows how the system works. The other experiences the outcome without context.

New York’s laws reflect this shift. On one end, the state is asking frontier AI developers to document safety protocols, assess catastrophic risks, and report serious incidents. On the other, it’s telling advertisers: if you use a synthetic human, you must disclose it.

These aren’t separate ideas.
They’re two ends of the same trust problem.


The common thread people are missing

Most commentary treats these laws as isolated.

One is about “advanced AI safety.”
Another is about “deepfakes in advertising.”

Step back, and a pattern emerges.

New York is regulating where AI breaks social expectations.

When a system:

  • Looks like a human
  • Acts autonomously
  • Influences high-stakes decisions
  • Or operates at a scale humans can’t easily challenge

…the state steps in to require disclosure, documentation, and accountability.

Not perfection.
Not bans.
Not moral grandstanding.

Just the ability to answer a basic question:

What is this system doing — and how do you know it’s safe?

That’s not AI regulation in the abstract.
That’s trust regulation in practice.


What most organizations still get wrong

Inside many enterprises, AI governance is treated as a policy exercise.

Write some principles.
Publish an ethics statement.
Run a training session.

Then get back to shipping.

The problem is simple: trust doesn’t live in principles.
It lives in operations.

When regulators ask for disclosure, they’re not asking for a PDF.
They’re asking whether you actually know:

  • Where AI is used
  • What risks were considered
  • What controls exist
  • What happens when something goes wrong

In marketing teams experimenting with AI avatars, this often surfaces as surprise:

“We didn’t realize the tool generated a synthetic performer.”

In product teams deploying models:

“We evaluated accuracy, but not misuse or downstream impact.”

In procurement:

“We assumed the vendor handled compliance.”

None of these answers hold up once trust becomes enforceable.
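
To make that concrete, here is a minimal sketch of what “actually knowing” could look like, assuming a hypothetical internal registry of AI use. Everything in it (the AIUseRecord class, its fields, the example entry) is illustrative, not a structure any of the New York laws prescribe.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseRecord:
    """One entry in a hypothetical internal registry of AI use."""
    system: str                        # where AI is used (tool, vendor, workflow)
    owner: str                         # which team is accountable for it
    generates_synthetic_media: bool    # e.g. avatars or "synthetic performers"
    disclosed_to_users: bool           # do the people affected know AI is involved
    risks_considered: list[str] = field(default_factory=list)  # realistic failure modes
    controls: list[str] = field(default_factory=list)          # safeguards and limits
    incident_contact: str = ""         # who acts when something goes wrong

# The marketing-avatar surprise above, written down instead of discovered later:
avatar_campaign = AIUseRecord(
    system="Campaign avatar generator (third-party tool)",
    owner="Brand marketing",
    generates_synthetic_media=True,
    disclosed_to_users=False,          # the gap a disclosure law makes visible
    risks_considered=["viewer deception", "likeness misuse"],
    controls=["human review before publication"],
    incident_contact="marketing-ops@example.com",
)
```

Nothing about this record is sophisticated. The point is that the four questions above stop being rhetorical once someone has to fill in the fields.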


The quiet shift from transparency to evidence

There’s another subtle but important change underway.

Regulators are moving from transparency as intent to transparency as proof.

Take the RAISE Act. It doesn’t just ask large AI developers to say they care about safety. It requires them to document how risks are handled, report critical incidents, and be prepared for oversight.

Similarly, the synthetic performer law doesn’t ban AI-generated humans. It simply requires that their use be disclosed — clearly and consistently.

In both cases, the expectation is the same:

If you can’t show it, you don’t really control it.

Anyone who has lived through earlier regulatory shifts will recognize this pattern — data privacy, financial controls, cybersecurity.

First come principles.
Then policies.
Then logs, evidence, audits, and accountability.

AI governance is now firmly entering that third phase.


A clearer way to think about AI governance

Rather than asking, “Is our AI ethical?”, a more useful question is:

Where could trust reasonably break — and can we see it happening?

That leads to a different mental model.

AI governance isn’t a single control.
It’s a chain:

  • Visibility: Do we know where AI is used?
  • Disclosure: Are people aware when they interact with it?
  • Risk assessment: Have realistic failure modes been considered?
  • Controls: What limits or safeguards exist?
  • Evidence: Can we prove any of the above after the fact?

New York’s laws touch different links in this chain, but the direction is consistent, and each link is concrete enough to check, as the sketch below suggests.

Trust must be operationalized, not asserted.
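
As an illustration of what “operationalized” could mean, and nothing more, the following sketch walks registry entries (stored here as plain dicts; the field names and the evidence_links idea are assumptions, not anything the statutes define) through the five links and reports the ones that cannot be demonstrated.

```python
# Hypothetical trust-chain check over registry entries stored as plain dicts.
# Field names are illustrative assumptions, not definitions from any statute.

REQUIRED_LINKS = {
    "visibility":      lambda r: bool(r.get("system")),
    "disclosure":      lambda r: r.get("disclosed_to_users", False)
                                 or not r.get("generates_synthetic_media", False),
    "risk assessment": lambda r: bool(r.get("risks_considered")),
    "controls":        lambda r: bool(r.get("controls")),
    "evidence":        lambda r: bool(r.get("evidence_links")),  # docs, logs, reviews
}

def missing_links(record: dict) -> list[str]:
    """Return the links in the chain this record cannot yet demonstrate."""
    return [name for name, check in REQUIRED_LINKS.items() if not check(record)]

registry = [
    {
        "system": "Candidate screening model",
        "disclosed_to_users": True,
        "risks_considered": ["bias", "over-screening"],
        "controls": ["human review of rejections"],
        "evidence_links": [],  # assessed and controlled, but nothing to show later
    },
]

for record in registry:
    gaps = missing_links(record)
    if gaps:
        print(f"{record['system']}: cannot yet show {', '.join(gaps)}")
```

For the example entry, the output flags only one gap: evidence. That is the whole argument of this piece in one line, since the system was assessed and controlled, but nothing can be shown after the fact.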


Practical implications for leaders

If you’re a CIO, CRO, CISO, or AI leader, this moment is less about New York specifically and more about what’s coming everywhere.

A few implications are already clear:

Marketing and communications teams are now part of AI governance.
So are procurement and vendor management.

Disclosure obligations will surface before deep model audits — because that’s where users feel deception first.

Expect “prove it” questions from regulators, boards, and insurers — not philosophical debates.

And perhaps most importantly:
AI risk will increasingly be judged by impact on people, not by model architecture.

That’s why synthetic performers matter as much as frontier models. Both shape perception, behavior, and trust at scale.


Where this leaves us

New York isn’t trying to win the AI arms race.

It’s trying to prevent a trust collapse.

By regulating how AI shows up in the real world — how it’s disclosed, documented, and explained — the state is signaling what the next phase of AI maturity requires.

Less magic.
More clarity.
Less “trust us.”
More “show us.”

For organizations building and deploying AI, this shift is uncomfortable — but necessary.

Because in the end, the most important question about AI isn’t how advanced it is.

It’s whether people can trust it — and whether you can prove they should.