It turns out that regulations and expressions of governmental sentiment about AI aren’t just toothless; they also miss entire areas of development that should matter to all of us.
Consider recursive self-improvement.
Recursive self-improvement is the ability of a machine intelligence to edit its own code. Experts describe its operation with loads of obfuscating terms – goal-oriented design, seed improver, autonomous agent – but the idea is simple: imagine an AI that could purposefully change not only its intentions but the ways in which it was “wired” to identify, consider, and choose them.
Would an AI that could decide for itself what to do, and why, be a good thing?
The toffs developing it sure think so.
Machine learning already depends on recursive self-improvement, of a sort. Models are continually retrained on more data and on the outcomes of their past decisions, which improves the accuracy of future decisions and reveals what new data needs to be added.
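To make that loop concrete, here’s a minimal sketch of the pattern, with every function name invented for illustration: a toy model is refit as labeled data accumulates, and the region where it is least certain suggests where the next data should be gathered.

```python
import random

# Toy retraining loop: fit a 1-D threshold classifier, find where it is
# least certain, gather new data there, and refit. All names here are
# hypothetical; no real library is being used.

def fit_threshold(points):
    """Pick the cutoff that best separates labeled (value, label) pairs."""
    candidates = sorted(x for x, _ in points)
    def accuracy(t):
        return sum((x >= t) == label for x, label in points) / len(points)
    return max(candidates, key=accuracy)

def uncertain_region(threshold, width=0.5):
    """The band around the boundary where predictions are shakiest."""
    return threshold - width, threshold + width

# Seed data; in this toy world the true cutoff is 1.5.
data = [(0.2, False), (0.9, False), (2.1, True), (3.0, True)]

for round_ in range(3):
    threshold = fit_threshold(data)
    lo, hi = uncertain_region(threshold)
    new_x = random.uniform(lo, hi)      # sample where the model is unsure
    data.append((new_x, new_x >= 1.5))  # a stand-in for fresh labeling
    print(f"round {round_}: threshold={threshold:.2f}, sampled x={new_x:.2f}")
```

The point isn’t the classifier; it’s the shape of the loop: results feed back into the model, and the model’s weaknesses steer what gets collected next.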
But that’s not enough for researchers chasing Artificial General Intelligence. AGI would mean AIs that could think as comprehensively and flexibly as humans. Set aside whether the very premise of AGI is desirable, let alone achievable (I believe it is neither); empowering machines to control their own operation could turbocharge their education.
AIs could use recursive self-improvement to get smarter at finding ways to make themselves smarter.
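As a crude illustration of what that meta-loop means, here’s a hedged sketch, not a claim about how any real system is built: a toy optimizer that adjusts not only its answer but the step size it uses to search, so that its own results rewire how it searches.

```python
import random

def score(x):
    """Hypothetical objective: higher is better, peak at x = 3."""
    return -(x - 3) ** 2

x, step = 0.0, 1.0  # current answer, and the knob controlling how it improves
for _ in range(200):
    new_step = step * random.choice([0.5, 1.0, 2.0])  # mutate the mechanism
    new_x = x + random.uniform(-new_step, new_step)   # use it to mutate x
    if score(new_x) > score(x):
        # Keep both the better answer and the step size that found it:
        # the search procedure itself is reshaped by its own results.
        x, step = new_x, new_step

print(f"best x = {x:.3f}, final step size = {step:.4f}")
```

Even in this trivial form, the improver and the thing being improved are the same program.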
AI propagandists cite wondrous benefits of such smart machines, ranging from solving big problems faster to providing little services to us humans more frequently and efficiently.
What they don’t note is that AIs that can change their own code will function wholly outside of human oversight, their programming adaptations potentially obscured and their rationales inscrutable.
How could anybody think this is a good idea?
Nobody does, really, except the folks hoping to profit from such development before it does something stupid or deadly.
It’s the kind of thing that government regulators should regulate, only they don’t, probably because they buy the propaganda coming from said folks about the necessity of unimpeded innovation (or the promises of the wondrous benefits I noted a bit ago).
Or maybe they just don’t know enough about the underlying tech to see that there’s a potentially huge problem, or they’re too scared to question it because their incomplete knowledge might make them look like fools.
I wonder what other development or application issues that should matter to us are progressing unknown and unregulated.
If you ever thought that governments were looking out for us, you thought wrong.
[This essay appeared originally at Spiritual Telegraph]