by Katie Heffring
Ever since I read Aldous Huxley’s Brave New World, science fiction has fascinated me. I can’t help it. I prefer the Kardashev over the Kardashian. I love the perfect blend of imagination and science. But I also love how many ideas from sci-fi authors, physicists and astronomers of the past have materialized. So let’s expand our minds, because making predictions about the future is always fun.
Here’s my prediction for this doozy of a question:
Most sci-fi stories I’ve read deal with one or the other. And in most cases, AI superintelligence seems to happen before interstellar travel. Therefore, I’m going to make two assumptions: first, that both AI superintelligence and interstellar travel will happen, and second, that AI superintelligence will arrive before interstellar travel.
According to Nikolai Kardashev’s scale of advanced civilizations, the technology for ramjet fusion engines or antimatter spaceships, and thus interstellar travel, won’t be available for another 100 to 200 years. By that time, civilization will have mastered planetary energy, language, communication, culture and the economy, and achieved ecological balance. (I am also going to assume that manned solar sail spaceships without ramjet or antimatter engines are too risky and too slow for round-trip interstellar travel, so some sort of hybrid ship would make more sense.)
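For the curious, the Kardashev scale can be made quantitative. Carl Sagan proposed a continuous interpolation of Kardashev’s original discrete types, rating a civilization by the power it commands: K = (log10 P − 6) / 10, with P in watts. A minimal sketch (the present-day power figure below is only a rough estimate):

```python
import math

def kardashev_rating(power_watts):
    """Carl Sagan's continuous interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, where P is the power a civilization
    commands, in watts."""
    return (math.log10(power_watts) - 6) / 10

# Humanity currently uses very roughly 2e13 W (a ballpark estimate),
# which lands us somewhere around Type 0.7.
print(round(kardashev_rating(2e13), 2))

# On Sagan's formula, Type I (full planetary energy) sits at 1e16 W.
print(round(kardashev_rating(1e16), 2))  # 1.0
```

So on this reckoning we are not even a Type I civilization yet, which is why mastering planetary energy is a prerequisite in the timeline above.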
So with this in mind, will AI superintelligence develop within the next fifty years? I think so. When I Googled “artificial intelligence projects,” Wikipedia gave me a huge list, and those are just the nonclassified projects. OK, now I’m dipping into reality here. But I can’t help it. AI technology is advancing quickly. Just two months ago, a computer beat a professional human player at Go for the first time, a feat many considered out of reach this decade!
Anyway, remember good Sci-Fi is a blend of both imagination and real science.
I think it’s time for some deep reflection. Are you afraid of a society controlled by AI? Over the past few years, prominent scientists and computer whizzes have warned us to take heed. But consider the advice of Alex Garland, director of Ex Machina:
In very broad terms, human behavior is frightening when it is unreasonable. And reason might be precisely the area where artificial intelligence excels. . . . The investigation into strong artificial intelligence might also lead to understanding human consciousness, the most interesting aspect of what we are. This in turn could lead to machines that have our capacity for reason and sentience, but different energy requirements and a completely different relationship with mortality. That could mean a different future. A longer future. In which case, we could rephrase the warnings of Mr. Hawking and Mr. Wozniak. Where they say that A.I. will spell the end of humans, we could say that one day, A.I. will be what survives of us.
So perhaps civilization will not achieve planetary balance and interstellar travel without the help of AI. Just go with the flow!
While not writing articles for Ballz, Katie is either freelance editing (firstname.lastname@example.org) or imagining the future.