President Trump signed an order in late January to rescind a requirement that the government avoid using AI tools that “unfairly discriminate” based on race or other attributes, and that developers disclose their most potent models to government regulators before unleashing them in the wild.
“We must develop AI systems that are free from ideological bias or engineered social agendas,” the order said, as it introduced biases of misjudgment, error, stereotyping, and the primacy of unfettered and unaccountable corporate profitability into the development of AI systems.
A small group of crypto and venture capital execs has been tasked with making sure that whatever new rules emerge are dedicated to the New Biases and free from the Old Ones, so nothing to worry about there.
I was never a big fan of using potential discrimination or bias as the lens through which to understand and grapple with AI development. After all, there are laws in place to defend individual rights however defined, and a computer system that gets something “wrong” isn’t the same thing as taking a purposefully punitive action.
We could end up with AI systems that deftly avoided any blunt associations with race or gender yet still made difficult, if not overtly cruel, decisions based on deeper analyses of user data.
The scary part of AI was never that it would work imperfectly and therefore unfairly, but that it would one day work perfectly and thereby put all of us under its digital thumb. There’s nothing inherently fair about our lives being run by machines.
But at least it was an attempt at oversight.
The worst part of the new administration’s utter sellout is that it enshrines the risk inherent in AI development as something we users will bear entirely.
The President’s order declared that it will revoke policies that “act as barriers to American AI innovation.” To technologists and their financial enablers, that means any rules that attempt to understand, keep tabs on and, if necessary, try to mitigate harm to people and society.
This ideology, summarized in the glib phrase “fail fast,” holds that innovation only happens when it’s unfettered. Any problems it creates or discovers thereafter can always be fixed.
Only that’s a lie, or at best a self-fulfilling prophecy.
Just think of the harm caused by social media, both to individuals (and teens in particular) and to our ability to participate in civil discourse. How about the destruction of the environment caused in large part by the use of combustion engines?
Technologies are supposed to disrupt and change things, and there’s no denying the benefits of transportation or online access. But had we taken the time to consider the potential negative effects, however imperfectly and incompletely, could we as individuals and societies have lessened them?
Once adopted, AI’s functional impacts here and there might be improved, but its presence in our lives will not be fixable. Its advocates know this, and they’re betting that all or most of its benefits will accrue to them while its shortcomings are borne by us.
This is perhaps the worst bias of all, and it’s now our government’s policy.
Oh, and how about buying some crypto while I have your attention?