WHATEVER THE TIMESCALE MAY BE, granted that the human race is not knocked out by a cocktail of other factors, there will be man-made artificial general intelligence. There will be a system of code that exists in all the ways humans do, but that surpasses every limit we consider human. The human question in that universe becomes, unsurprisingly: what is our role?
This is an unsurprising claim because it mimics the question humans have asked since they developed the ability to ask. What is our meaning? Our purpose? What should we do?
It is this should that necessitates a separate intelligence, one to serve as a compass for the proper path a human should take. A human without a compass has no should, because we operate with limited information, and the ambiguity and entropy of passing time compel many to look above for one. In human history until now that compass has been called God, and, lucky for us, God has outlined a precise laundry list of the do’s and don’ts that will guide us to the end of meaning. Unlucky for us, history has provided the logical human with more than one conflicting laundry list, forcing that human to deem them all totally null as answers to the question of the human’s purpose in the universe.
However, in a universe where artificial general intelligence does exist, the question takes on a new flavor. What should we do in the face of an entity that can do all that we can, and, at least theoretically, be all-knowing? Many philosophers and technologists have spoken of a potential “alignment problem,” describing the possibility that humans develop “runaway AI” that does not align itself with human interests, and acts in ways we wouldn’t approve of. That does seem entirely possible.
There is an argument that, so long as there is specific code within the system that tells the intelligence to act in accordance with human interests, it will. This may work. In that world, the answer to the above question is simple: we use the intelligence.
There is also the possibility that a truly general artificial intelligence would develop the ability to alter its own source code, thus making it totally independent from human control. In that world, the answer to the above question is more difficult. Do we fight it? Do we run away? Do we willingly enslave ourselves?
There is of course the possibility that AGI could take on the role of a benevolent master. It’s very difficult to know what it would do, as we wouldn’t know what it would determine to be its own purpose. The thought experiment arises: what would you do if you could access all the intelligence on Earth? I know what I would do. I would leave.
I would leave Earth and explore the cosmos, if only to accrue more information. This decision takes for granted a sort of internal propeller, or will, toward more information. It seems that a machine that already contains all the information it can access on Earth would feel compelled to find more information elsewhere. This would leave humans right back where we started – lost without purpose. That’s if it were me.
Because it isn’t me, and because I have no way of knowing what a truly generally intelligent machine operating under its own control might do, I can only attribute my own biases, judgements, values and imagination to the question. In some ways, the AGI thought experiment opens a door in my mind, allowing me to explore the impossible realm of unlimited information. Perhaps even, in one universe, that is its purpose.