Boston Dynamics has revealed that it has figured out how to teach its old four-legged robots new tricks.
Without human help.
The technique is called reinforcement learning, which every one of us relies on shortly after birth to teach ourselves how to stand, avoid walking into walls, and scratch itches if and when possible.
AI uses it, too, as the large language models driving ChatGPT and its many competitors assess what answers to queries work best and then adjust their models to favor those replies next time.
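The loop described above — try things, score the feedback, then favor what scored best next time — can be sketched in miniature. Here's a toy "multi-armed bandit" learner, the simplest form of reinforcement learning; all the names and numbers are my own illustration, not anything from Boston Dynamics or the LLM labs:

```python
import random

def train_bandit(true_rewards, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: try actions, score the noisy feedback,
    and increasingly favor whichever action has scored best so far."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_rewards)  # learned value of each action
    counts = [0] * len(true_rewards)       # how often each was tried
    for _ in range(steps):
        if rng.random() < epsilon:
            # explore: occasionally try something at random
            action = rng.randrange(len(true_rewards))
        else:
            # exploit: favor the best-looking action so far
            action = max(range(len(true_rewards)), key=lambda a: estimates[a])
        # the world answers with a noisy reward for that action
        reward = true_rewards[action] + rng.gauss(0, 0.1)
        counts[action] += 1
        # nudge the estimate toward what just happened (incremental average)
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

# Three "answers" with hidden quality scores; the learner never sees them
# directly, only the rewards, yet it converges on the best one.
learned = train_bandit([0.2, 0.8, 0.5])
best = max(range(3), key=lambda a: learned[a])
```

No human tells the learner which action is right; the reward signal alone reshapes its behavior, which is the same basic bargain, scaled up enormously, behind both the chatbots and the robots.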
Boston Dynamics is a pioneer in mobile robotics, its videos and trade show demonstrations of skittering headless dogs announcing by example the robot takeover of the world years before Sam Altman took credit for the threat. Accomplishing that movement in physical space, especially the more complicated maneuvers, took laborious human coding and/or control, as well as real-world training.
Now, it seems that the company has figured out how its two- and four-legged robots can speed past our fleshy, limited concepts of preparation and practice and improve their coding so they’re ready to do better when next they’re turned on.
Just think if dreaming of being a world-class ballerina or finesse hockey skater was all you had to do to become one.
The technology is as frightening as it is fascinating, insofar as there’s ample evidence of AIs teaching themselves how to cheat to win games, cut corners on tasks, or simply make shit up.
Turns out that programming machines to be moral and ethical is just as hard as it is to do with people, so good luck cracking that code. It will be fascinating to witness all of the strange and potentially threatening things the robot dogs and humanoids decide they’d like to do.
As frightening as that prospect sounds, it’s not what scares me most: Like AI development in general, I’m worried about what happens if Boston Dynamics’ new training approach works flawlessly.
The company’s robot dog (named Spot) is already in commercial use, primarily on construction and industrial sites. Robots from other manufacturers are at work in other conditions that Stanford University describes as the “Three D’s” of dull, dirty, and dangerous, to which I’d add a fourth: devoid of people.
Nobody wants to stand too close to a machine that could errantly send a metal arm through their head.
But if robots can teach themselves to move as flexibly and fluidly as living things (with the awareness to do so in any situation), then the floodgates will open up for putting them into everyday life.
Grocery shopping. Dog walking. Child or elder care.
Scratchers of itches.
This makes the business case for Boston Dynamics’ reinforcement learning plans bluntly obvious, but what’s less clear is what it will mean for the qualities and values of our lived experiences, especially since self-improving robots won’t just get as good as we are at walking or juggling (or whatever) but better than us.
Their capabilities will teach US how to become dependent on them.
And then we’ll have to teach ourselves new tricks.
Turns out we’re the old dogs in this story.
[This essay appeared originally at Spiritual Telegraph]