AI That Thinks
AIs don’t “think” about tasks any more than blenders “think” about making milkshakes.
CES aims to present a view of the world that rises to the level of self-fulfilling prophecy. Rest assured that the media coverage of it this week will glowingly reflect that certainty, and its subtle impacts will be felt in how AI is talked about until, well, next year’s event.
What the tech types call “inefficiency,” I call “experience.” The label obscures the fact that exchanging our freedom to make decisions, right or wrong, will be a trade that is irreversible long before we know its cost.
Recursive AI self-improvement is the kind of thing that government regulators should regulate, only they don’t, probably because they buy the industry’s propaganda about the necessity of unimpeded innovation (or the promises of its wondrous benefits).
An AI in the recording studio isn’t an equal collaborator but rather an advanced filing system.
If the mysterious intersection of thought and language is the algorithm for AI, is it reasonable to expect that it will somehow emerge from processing an unknown critical threshold of images?
Maybe AI, seen as akin to the tulip or South Sea crazes, or as an imperfect technical tool like the Edsel or the laser disc, is indeed our era’s latest bubble, or bubbles.
Not only can AI write in the style of famous poets, but it can also improve on their work.
AI already controls our access to information online, as everything we’re shown on our phones and computer screens is curated by algorithms intended to nudge us toward a particular opinion, a purchase decision and, most of all, a need to return to said screens for subsequent direction. I wonder how much of our history, especially the most recent years in which we’ve begun to transition to a society in which our intelligence and agency are outsourced to AI, will survive over time.