The First AI War Has Begun
If we talked about AI differently, maybe we’d come up with different decisions about it.
Tech boffins met in DC to dictate AI regulation (or lack of it). Lawmakers are waiting to read the meeting notes on CompuServe.
“Rubbish in, rubbish out,” says noted creator of rubbish about AI.
We want kids to grow up to be learners, which probably correlates with the ability and willingness to listen, engage, and evolve in their personal and professional lives. Using ChatGPT in this context is like shipping ready-made dinners to a gourmet restaurant’s kitchen.
Belief in the authoritative and restorative powers of technology is a fantasy, and using it to explain what is happening with AI at work, including its impact on entry-level opportunities, isn’t remotely legitimate.
Like so many problems today, copyright lawsuits over music are a people problem, and therefore not solvable with AI or any other technology; in fact, tech might actually make things worse in this instance by multiplying the opportunities for theft and mimicry many times over.
Haven’t you noticed that most companies spend a lot of time and money telling us crap that doesn’t matter, or that’s only sorta, kinda, maybe true? They’re all solving climate change and encouraging social justice, their products and services all united in purpose to make the world a better place.
AIs equipped with perfect knowledge (or its corollary, near-perfect big data analyses) will leave us with few divergent views to drive pricing or trades. Markets will become transparent transaction platforms, nothing more.
We can and should strive to ensure that AIs aren’t overly stupid or cruel, but the premise that we can build machines that don’t just operate efficiently but also provide answers that satisfy our ever-changing opinions about bias and truth is a fool’s errand.