I have a theory: It wasn’t capitalism and democracy that won the Cold War. Popular Science won the Cold War.
Popular Science and Popular Mechanics magazines — as well as other journals and magazines that took an awe-inspired, jaw-dropping look at science and technology — paid particular attention to military technology developed by Soviet bloc engineers in the 1950s and 1960s. The stories typically depicted Soviet military might as growing and unbeatable.
Sort of like runaway artificial general intelligence (AGI).
Soviet tanks had better armor.
Soviet planes were faster and more maneuverable.
Soviet subs dived deeper and plowed through the water more silently.
Soviet nuclear ICBMs were poised to strike more accurately and more powerfully.
(A great place to check out the above claims is the Popular Science Archive Search.)
We can argue about how easily the military-industrial complex co-opts this fear. (I read once that the CIA would leak exaggerated claims to stoke the Cold War fires.) But let’s save that for another day. The point is that these unsubstantiated and — in the clear view of hindsight — exaggerated claims of Soviet bloc military might prompted Western engineers to design equipment that was more advanced than even these magazines’ fantastic visions of threatened military dominance. Stealth technology and global positioning systems are just two of the way-out technologies that sprang from this era of paranoia.
So, how does this relate to advanced AI and AGI?
In the debate between Evil AI and Benevolent AI, the evil side offers a grim assessment of the technology. Advanced AI has much more power to wreak destruction on the world than a pack of marauding T-72 battle tanks tearing into Western Europe through the Fulda Gap.
One scenario: An advanced form of AI would simply see humans as a virus and eradicate us.
The best-case scenario for AI un-enthusiasts is that the AI will capture us and treat us as pets.
Will that happen?
Are there scenarios where these AI nightmares don’t come true?
I’m not the best odds maker, but I can make an educated guess that the odds are about even for a transition to Benevolent AI, or, at least, Indifferent AI. For instance, incredibly advanced AI, able to tap limitless resources in ways we might not even imagine, would probably not consider mere humans as competition. Why would it eradicate us? And human pets? We would make horrible pets. I’m sure any AI worth its silicon (or graphene) would rather watch paint dry on the holodeck.
Positive AI-backers also suggest it’s more likely that humans will interface with advanced AI, not let it off its leash, so to speak.
So, with all things somewhat equal, what’s the best policy?
The best strategy when dealing with the first waves of powerful AI, which seem to have already hit the shore, is to prepare for the worst — and design for the best. As long as fear doesn’t become debilitating, a healthy paranoia about the destructive capabilities of AI could help create systems that are not only safer but possibly even more advanced than systems that disregard negative scenarios.
And, at least this time, Russian and American engineers can be on the same side.
About the Author:
Matt Swayne is a blogger and science writer. He is particularly interested in quantum computing and the development of businesses around new technologies. He writes at Quantum Quant.