The world’s Big Four accounting firms are developing “AI assurance services” to certify that their corporate clients’ AI tools behave properly and reliably.

At face value, it’s good news that somebody is willing to take responsibility for what AI tools do and don’t do (God forbid the tech companies that make them should have to sacrifice some of their carte blanche for innovation to help ensure they don’t destroy the world).

But it’s not terribly reassuring, for at least three reasons:

First, today’s certified AI buddy could become tomorrow’s mass murderer. Changes happen fast and get shared and modified across connected systems, so there’s no telling what novel insight or behavior may appear, let alone where or when. The certification of an AI tool will be about as reliable as a neighbor of a school shooter saying on TV, “he was a good kid…”

Snapshots in time are outdated the moment they’re taken.

Second, the big accountancies have a dismal record in the certification business: they’re regularly sued for issuing wrong or incomplete audits, and then there’s the not-so-small conflict of interest in selling lucrative consulting services to the very clients they audit in order to help them pass said audits.

And audits are only as good as their criteria, as evidenced by how many companies have gotten various versions of “green seals of approval” for their rigorous compliance while otherwise continuing to pollute the planet.

Third, back to my opening point: Where the fuck are the tech companies that make this stuff?

In every other industry, the companies that make things are held legally responsible for the performance of those things. They aren’t required to guarantee that their products are never misused or turned to breaking the law, but they are expected to do their best to make such acts less likely and/or less damaging.

Do we expect pharmacies to certify that the drugs they sell won’t kill us? Would it make sense for grocery stores to review food to make sure it won’t poison us? Do car dealerships have to police the safety of the vehicles they sell?

No, of course not. Those are the responsibilities of drug companies, food growers and distributors, and car makers.

When it comes to AI tools, none of those practices or moral expectations apply.

An AI tool can be produced with known and unknown weaknesses, not to mention a built-in capacity to extend or add to them, and yet it’s somehow someone else’s responsibility to make sure nothing bad comes of it. In fact, AI makers have recently retreated from their commitments to testing and other aspects of “responsible” AI development.

So, is it reasonable to expect that accounting firms staffed with, well, accountants rather than hoity-toity AI developers, and operating under potentially mixed or biased incentives, will meaningfully ensure that AI tools are safe?

Again, of course not. But why shouldn’t they cash in on the AI craze before the real accounting occurs?

[This essay originally appeared at Substack]
