
AI-generated at Canva.com based on the prompt, “an impressionist painting of a squirrel leaping out of the way of an onrushing car in the street.”

I’ve been reading up on artificial intelligence – Time Magazine’s recent “100 Most Influential People in AI” and The Atlantic’s excellent piece on OpenAI CEO Sam Altman among many others.

My book, “The Laser That’s Changing the World,” is apparently among the 183,000 pirated books being used to train AI systems at Meta, Bloomberg, and elsewhere. These systems will know much more about the history of lidar than I do, seeing as I’ve forgotten so much of it. So it is with entities cursed with merely organic brains.

An Oppenheimeresque fear of the unforeseeable consequences of AI is more than an undercurrent in whatever one reads about the fast-advancing technology. You’ve got the inventors of modern AI – Geoffrey Hinton being the primary example – on through to major AI investors and practitioners essentially terrified of the prospects of unfettered AI. And for good reason. At the same time, there’s a mad rush to advance the technology toward human-brain-level capacity (~100 trillion neural connections, according to Hinton, who sees that happening within perhaps five years) and to profit from it. And, as with Oppenheimer, the same voices hustling the technology forward – extremely intelligent people – are those sounding the most unsettling warnings.

And there’s not really a clear idea about how one might fetter AI. You’ve got ideas as disparate as non-networked kill switches and OpenAI cofounder Ilya Sutskever’s notion of impressing certain values (such as, for example, not desiring to exterminate humanity) on AI that becomes vastly more intelligent than us as the singularity barrels toward us. His model of how a less intelligent being controls a more intelligent one is that of the human baby’s influence on its parents to keep it alive.

Before AI is capable of taking us out, it will certainly disrupt the economy – maybe grow it in some ways, but certainly displace a lot of “knowledge workers.” Such as, for example, myself. On the other hand, using one’s hands in ways that AI helps dictate (or flat-out dictates) could inject new life – and relative social and economic status – into those who actually work for a living.

Nobody knows where this is going.

If you use a Google Home or Alexa device, everything you say can, and probably will, be used to train AI engines. Think how incredibly lifelike the voices synthesized from all that intonation and vocalization data will be. The ability of these systems to enable human-factor-driven hacking will be profound. The capacity for misinformation and scamming becomes more or less infinite. We’re easily manipulated by much blunter intelligence, as Donald Trump’s rise and staying power have demonstrated.

In-person meetings may become the only way of knowing if you’re talking to an actual human being.

I think those who dismiss large language models like ChatGPT as “glorified autocomplete” that merely statistically discerns the appropriate next word are overestimating the human writer’s ability to do much more than that. Yes, we sometimes create outlines in advance, but often, coming up with the next word based on what we just wrote, while keeping in mind the topic at hand, is pretty much all we’re doing.
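For the curious, here’s a minimal sketch of what “statistically discerning the next word” means at its crudest. This toy bigram counter is my own illustration, not how ChatGPT actually works – real models learn billions of parameters over subword tokens using transformer networks – but the underlying objective, predicting the next token from what came before, is the same.

```python
# Toy next-word prediction: count word bigrams in a tiny corpus,
# then pick the most frequent follower of the previous word.
from collections import Counter, defaultdict

corpus = (
    "the squirrel leaped out of the way of the car "
    "the squirrel ran up the tree the car drove down the street"
).split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Return the statistically most likely next word after `prev`."""
    followers = bigrams.get(prev)
    if not followers:
        return "<unknown>"
    return followers.most_common(1)[0][0]

# Generate a few words, each chosen only from the word just before it.
word = "the"
sentence = [word]
for _ in range(5):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the squirrel leaped out of the"
```

Swap the twenty-word corpus for the internet and the bigram counts for a vastly more powerful statistical model, and that same next-word loop is, roughly speaking, what’s happening when ChatGPT writes a paragraph.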

Anyone cranking out visual art or copy that isn’t pretty damn original and/or thoughtful is going to be out of a job, or perhaps relegated to rote editorial roles – as when I clean up a transcript otter.ai has already done a pretty good job with. In art, sculpture may again become as preeminent as it was in ancient Greece.

The most interesting thing about the sort of superintelligence that AI will inevitably achieve is that, before it exists, we can’t imagine what it will engender. This is no different than with any paradigm-busting technology, be it controlling fire, shaping wheels, or developing semiconducting circuits. Except here, the creation is intellectually superior to us, the creators. We’re going to be like squirrels faced with the concept of automotive engineering. Like squirrels, we’ll mostly be concerned with not becoming road pizza.

I think Sutskever’s notion of imprinting an appreciation of human values on AI is great, and it dovetails with Elon Musk’s approach. (Per Walter Isaacson, “Another way to assure AI safety, Musk felt, was to tie the bots closely to humans. They should be an extension of the will of individuals, rather than systems that could go rogue and develop their own goals and intentions.”) But Vladimir Putin’s values are not Mother Teresa’s values.

How many milliseconds would it take artificial general intelligence (AGI), having considered the big picture of our crushing environmental footprint and the degradation it causes, to take a dark view of human values? Yes, we may be the universe’s way of understanding itself. But will AI simply see itself as the logical successor in this heady role we have bestowed upon ourselves?

It’s hard to imagine AI that would conclude that a human population of some ten billion (the estimated peak later this century) represents a sound balance from a planetary perspective. Perhaps our steps to mitigate climate change and address economic injustice are as important in convincing future AI that we’re not irredeemable as they are in actually solving the problems themselves.

On the flip side, AGI could continually reconcile a dizzying array of variables to produce political and geopolitical optimization strategies for our most complex issues (income inequality, climate change, migration from the Global South to the Global North…). But these suggestions will run into entrenched parties doing what they have always done: resisting change that alters a status quo favorable to them. And what about political questions with no obvious answers no matter how smart you are, such as the Israeli-Palestinian conflict?

AI does seem like a pretty good explanation for the Fermi Paradox.

One thing we won’t need to worry about is AGI taking over in some overt, shoot-em-up Skynet fashion. If something superintelligent wants to take over/take us out, it will do so without our grasping what’s happening until it’s too late. That’s what we do with nettlesome “lesser” species, using tools as diverse as bear traps and ant baits.

I wonder about AI superintelligence’s impact on human motivation. If AI can do it – “it” being, over time, essentially everything that has historically required human thought – better and faster, why put in the hard work of learning? If AI guides our every move, we become true meat puppets with hedonistic hobbies. What will the impact on already-declining birth rates be? Would you have kids in such an environment?

I hope to stumble upon this in a few years, having long since forgotten it, and chuckle at how wrong I was.

1 Comment

  • Duke – October 4, 2023, 7:28 pm

    Could an AGI be the perfect complement to an idiocracy, such that we could provide future generations with some semblance of a working society despite failing to adequately educate them?
